Anybody around with some ecommerce (woocommerce specifically) dev experience? If so, any chance I could pick your brain a little bit?
Ecommerce in general, a fair amount. WooCommerce specifically, very little. I've looked at it and just barely escaped having to build a platform which interacted with it via the WordPress REST API. So depending on what you need, I may be able to help out.
Like legit, if you don't store that shit, your PCI compliance is as simple as answering "we do not store credit cards" to the 200 questions they ask to make sure you're really sure you don't store credit cards.
Just don't do recurring payments and you're good.
not a doctor, not a lawyer, examples I use may not be fully researched so don't take out of context plz, don't @ me
Has anyone used the gdb debugger on OSX recently? (More specifically, OSX 10.14+?) I'm just getting back into programming after a long break and figured that a widely-used 30-year-old tool would be a piece of cake to use, but I've run into some serious setup headaches so far. After spending hours googling I came across this, which at least got the thing working. But now arguments in quotations aren't parsed correctly ("arg 1" becomes the two separate arguments "arg and 1" in the output example), which is likely caused by the removal of the
set startup-with-shell enable
line in my .gdbinit file, which was required in order to get gdb working in the first place.
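For context (and this is an assumption about what that linked fix actually does): the macOS workaround usually cited uses the off form of that setting, and per the gdb docs, disabling the startup shell means gdb no longer gets shell-style quoting and wildcard expansion of the program's arguments, which would match the "arg 1" symptom exactly. A minimal ~/.gdbinit sketch:

```
# ~/.gdbinit
# Skip spawning /bin/sh to launch the debuggee. This works around
# macOS launch restrictions, but it also means the shell never
# processes quoting, so "arg 1" can get split into two arguments.
set startup-with-shell off
```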
Man, I've been doing this for years and I still get upset when I deploy code out to QA and QA finds bugs. Not even necessarily upset at QA, but man, it's so frustrating.
Hmm. Scaling issue with Prometheus. We have some heatmaps set up in Grafana, but we seem to be unable to run that query in Prometheus because we have close to 100 pods from that one service, resulting in a fair pile of separate time-series to sort out, and the heatmap queries fail.
Anyone run into this and found a fix? Prometheus can be configured to drop labels for certain metrics, but as far as I can tell it won't do any aggregation into single metrics.
I'm considering the option that it might be time to switch Prometheus to just do short-term retention and have something like M3DB for long-term aggregation and retention.
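One thing that might help before jumping to M3DB: Prometheus recording rules can pre-aggregate the per-pod series into one summed set, so the heatmap query only touches one series per bucket instead of ~100 per pod. A sketch, assuming a histogram metric called http_request_duration_seconds_bucket (made-up name, substitute your own) with one series per pod:

```yaml
groups:
  - name: heatmap_aggregation
    rules:
      # Sum the histogram buckets across all pods, keeping only the
      # job and le labels, so Grafana's heatmap queries one series
      # per bucket instead of one per pod per bucket.
      - record: job:http_request_duration_seconds_bucket:rate5m
        expr: sum by (job, le) (rate(http_request_duration_seconds_bucket[5m]))
```

Then point the Grafana heatmap at the recorded series instead of the raw one.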
I'm of the opinion that trading single points of failure for multiple points of failure just becomes an extremely obtuse, difficult-to-troubleshoot complexity problem and doesn't actually solve downtime issues.
You just trade "site is offline" for "cloudflare can't talk to your stuff". Trying to achieve anything past 98% uptime when you're not a data center is just an exercise in futility.
Change my mind.
Weeeeeeell, we pretty much run the entire healthcare IT infrastructure for the state of VT (and a bunch of upstate NY), including the only level 1 trauma center in the region.
Since you're in the area: NextGen's IT is also trash change my mind lol
I mean, adding complexity without the understanding is gonna fail at least as much, yes.
You gotta do it right, and for some systems you gotta do it, so get on that.
Nah, I'm good. Luckily I only interact with it peripherally when some poor practice outside our umbrella is having integration issues.
We have some fluffy "ensure 100% cloud uptime" sentence in our corporate goals and nobody on the tech side of the company has any idea what that means. :rotate:
I have been firing this quote (straight from Google) along with a few of Google's own uptime requirements/SLI docs at people lately when they bring me this 100% uptime crazy talk. Many of Google's own targets are lower than 99.999% as well.
100% is the wrong reliability target for basically everything. Perhaps a pacemaker is a good exception! But, in general, for any software service or system you can think of, 100% is not the right reliability target because no user can tell the difference between a system being 100% available and, let's say, 99.999% available.
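To put numbers on that quote, the permitted downtime per year falls off fast with each extra nine, and the difference between the top tiers is invisible to a user:

```python
# Allowed downtime per year for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted at `availability`."""
    return (1 - availability) * MINUTES_PER_YEAR

for target in (0.98, 0.99, 0.999, 0.9999, 0.99999):
    print(f"{target:.5f} -> {downtime_minutes(target):,.1f} min/year")
```

At 98% you're allowed roughly a week of downtime a year; at five nines it's about five minutes.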
I'd trust maybe like 1-2 of you here to do it well enough to understand it.
Everyone else that I've seen use meshes of microservices has delivered nothing but pain.
Monkey Ball Warrior · Seattle, WA · edited December 2019
Yeh, microservices! ... which is not a real thing so much as it is a direction on a spectrum... we used to just call this kind of thing SOA.
They are great on paper, but if your tools are not up to the task, you and/or the poor sucker left running ops for everything are absolutely going to hate how it turns out.
edit: So at a bare minimum you need great logging, monitoring, and tracing. But everyone's logs suck, and the monitoring/alarming, if it even exists, is just checking for things you expect or that have already happened once.
And I've never seen a good tracing setup.
This stuff isn't easy.
"I resent the entire notion of a body as an ante and then raise you a generalized dissatisfaction with physicality itself" -- Tycho
Like with just about everything else we do, it's a balancing act. 1000 simple things to manage and keep in sync is no longer simple. 20 or even 10 may no longer be simple enough. But depending on the overall needs, 2 or 3 more moderate systems with looser coupling can make many things simpler to do than 1 giant, tightly coupled, mega-system.
But a giant monolith can work well for a long time. Instagram is still a giant monolithic Django site, for the most part, based on a recent (ish? not sure if it was a 2019 or 2018 thing) talk they gave about their deployment/ops stuff.
I've been doing some Python tutorials and I've run into a snag that seems very simply but I've not been able to figure out. The tutorial seems to jump between Python 2 and 3 without warning so it might be related. I also probably need to seek out a more consistent tutorial.
Regardless, anyone have any clue why this class would give me a "TabError: inconsistent use of tabs and spaces in indentation" on the name_pieces line?

class User:
  def __init__(self, full_name, birthday):
    self.name = full_name
    self.birthday = birthday
    name_pieces = full_name.split(" ")
    self.first_name = name_pieces[0]
    self.last_name = name_pieces[-1]

All the tabs are 2 spaces each and the editor (Brackets) doesn't flag any of the indents. I'm missing something obvious, I'm sure, but I don't realize what =p.
EDIT - ugh... been messing with this a while and as SOON as I post it, I figured it out. Apparently my bottom part only needed one level of indentation.
I've been doing some Python tutorials and I've run into a snag that seems very simply but I've not been able to figure out. The tutorial seems to jump between Python 2 and 3 without warning so it might be related. I also probably need to seek out a more consistent tutorial.
Regardless, anyone have any clue why this class would give me a "TabError: inconsistent use of tabs and spaces in indentation" on the name_pieces line ?
All the tabs are 2 spaces each and the editor (Brackets) doesn't flag any of the indents. I'm missing something obvious I'm sure but I don't realize what =p.
EDIT - ugh...been messing with this a while and as SOON as I post it, I figured it out. Apparently my bottom part only needed one indention.
Glad we could be your rubber duck!
Programmer hot tip: buy an actual, physical rubber duck. Keep it on your desk. Next time you're stuck on a problem...explain the problem to the duck. Out loud. Ignore all of the crazy looks your coworkers give you.
Half of the time, this will bring you closer to the answer than you were before.
Like with just about everything else we do, it's a balancing act. 1000 simple things to manage and keep in sync is no longer simple. 20 or even 10 may no longer be simple enough. But depending on the overall needs, 2 or 3 more moderate systems with looser coupling can make many things simpler to do than 1 giant, tightly coupled, mega-system.
But a giant monolith can work well for a long time. Instagram is still a giant monolithic Django site, for the most part, based on a recent (ish? not sure if it was a 2019 or 2018 thing) talk they gave about their deployment/ops stuff.
I just have never really seen it work unless it's a huge company that can devote the resources to manage it (amazon/google).
Every time someone tries to get fancy with these little things (I'm mostly talking about situations that move away from single points of failure, not necessarily microservices), it's always just dozens of points of failure that all get hit by the same bug, and whoops, away you go. Now you're spending 3/4 of a day fixing an issue whose fix would've been as simple as "spin up another VM, restore the backup from a few hours ago" and taken all of 30 minutes.
edited by bowen
Spawnbroker - I've heard this is a thing and I will take your advice! I know so many times I'll get the answer to an issue that has bugged me ALL day while I'm zoned out on the drive home. I'd rather zone out on the duck and figure it out that day.
A big problem is many don't bother to ask "why microservices" before chopping an app up. When it's chopped up logically, it's easier to make it work, with a swath of related functionality moving together. It's when people get too atomic, or don't put thought into how they split things, that shit goes sideways. Which is most of the time.
I'm a .NET website developer, so it makes sense that my current project would be an Electron app, with Node.js, React/Redux, and custom Sass.
I just spent the day getting the project set up in Azure DevOps, and I have working builds now! The buzzwords collection on my resume grew three sizes today!
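For anyone else wiring up the same stack, a minimal azure-pipelines.yml for a Node/Electron build looks something like this (the script names are assumptions; match them to your package.json):

```yaml
# azure-pipelines.yml -- minimal sketch for a Node/Electron build
trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '12.x'
  - script: npm ci
    displayName: 'Install dependencies'
  - script: npm run build
    displayName: 'Build the app'
```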
Third-party payment processors ftw. Now we just have to deal with their shitty API instead!
Don’t receive them at any point ever. Use a payment gateway that isn’t dumb and handles it.
Please, that would mean I would have to explicitly return from the function like a peasant.
Rube Goldberg machine or bust.
Spoiler: in the end I kept them as generators, not coroutines.
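Since the thread never spells out the distinction, a toy sketch (not the actual project code): a plain generator yields values out, while a generator-based coroutine consumes values pushed in with send().

```python
def countdown(n):
    # Generator: produces values lazily; no explicit return needed.
    while n > 0:
        yield n
        n -= 1

def accumulator():
    # Generator-based coroutine: consumes values pushed in via send()
    # and yields the running total back to the caller.
    total = 0
    while True:
        total += yield total

print(list(countdown(3)))  # -> [3, 2, 1]

acc = accumulator()
next(acc)          # prime the coroutine (runs to the first yield)
print(acc.send(5))  # -> 5
print(acc.send(2))  # -> 7
```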
I made a game, it has penguins in it. It's pay what you like on Gumroad.
Currently Ebaying Nothing at all but I might do in the future.
:rotate:
let totalScore = entries.reduce(into: 0) { score, entry in score += entry.score }

Like I said, I just can't ever remember the syntax on those.
I'll give it a shot tomorrow! Thanks!!
/ ^ (?= ( 0 | 1 (?![0-9]*0) | 2 (?![0-9]*[01]) | 3 (?![0-9]*[012]) | 4 (?![0-9]*[0123]) | 5 (?![0-9]*[01234]) | 6 (?![0-9]*[012345]) | 7 (?![0-9]*[0123456]) | 8 (?![0-9]*[01234567]) | 9 (?![0-9]*[012345678]) ){6} ) (?= [0-9]* (?<b> [0-9] ) (?<adj> (?!(?P=b)) [0-9]) (?P=adj) (?!(?P=adj)) ) [0-9]{6} $ /mx

https://github.com/lambda-dojo-sofia/all-the-things/blob/master/2019-12-16/re/part2
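For anyone squinting at that regex: it appears to encode "six digits, never decreasing left to right, with at least one run of exactly two equal adjacent digits", which reads a lot more plainly outside of regex. A sketch, assuming that's the rule:

```python
from itertools import groupby

def valid(code: str) -> bool:
    # Six digits only.
    if len(code) != 6 or not code.isdigit():
        return False
    # Digits must never decrease left to right.
    if any(a > b for a, b in zip(code, code[1:])):
        return False
    # At least one run of *exactly* two equal adjacent digits.
    return any(len(list(run)) == 2 for _, run in groupby(code))

print(valid("112233"))  # True
print(valid("123444"))  # False (the 444 run is length 3, not 2)
print(valid("111122"))  # True  (the 22 run qualifies)
```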
*continues to low-key rant about how we need tracing*
Fun times.
Edit: Also, label cardinality is a real performance killer, so yeah, if you can reduce it you can probably squeak some time out of it.
The additional monitoring and service management are just additional points of failure.
It is a fucking lot of work.
But when it works, it's pretty slick.
https://en.wikipedia.org/wiki/Rubber_duck_debugging
I'm not joking, I have a rubber duck on my desk.
(I mean that it's a clown duck, not a clown in place of a duck)