[Programming] Reinventing equality, one language at a time
This is the PA programming thread, home to programmers everywhere. Here we talk about cats, ponies, synergised high-efficiency software cloud platforms, and occasionally programming.
PADev.net is a project started up by this thread to support PA developers. A discussion about shared hosting turned into an idea: provide hosting and a community to support those working on hobby programs, web services, and whatnot.
Some things require a dedicated VPS, but the bar to entry there isn't low: the cost is not extravagant, but the know-how required to manage one is daunting for many. PAdev.net provides shared hosting and support; a $5 monthly fee nets you a shell account on the hub server and the expertise of your peers.
Community members looking to help out can request an account for the website, where all members can create and maintain guides and share project updates. There is no cost to have a community account, just contact an administrator. Also available are you@padev.net email accounts or forwarders.
Python - One of the most popular scripting languages around, Python is the go-to language for...just about everything, actually. Its high scalability and widespread support through third-party libraries make it useful for many applications, from simple five-minute jobs to complex Web servers. Python can also be embedded or bound into lower-level languages such as C, letting you write performance-critical code as well as providing access to libraries written for those languages.
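As a tiny illustration of that last point, here's a sketch of calling straight into the platform's C library from Python using the standard ctypes module (no hand-written binding code; `find_library` may need help on some platforms, so treat this as a Unix-flavoured sketch):

```python
# Minimal sketch of Python's C interop via the standard ctypes module.
# This calls strlen() out of the platform's C library directly.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))  # locate libc portably
libc.strlen.argtypes = [ctypes.c_char_p]           # declare the C signature
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # -> 5
```

The same mechanism works for any shared library, which is why so many "Python" packages are really thin wrappers over C code.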
Ruby - Another popular scripting language, Ruby is best known as the basis for Ruby on Rails, a framework for Web development that competes with its Python equivalent, Django. While not quite as famous as the juggernaut that is Python, Ruby is nonetheless prolific - modern versions of RPG Maker use it as their scripting language, for instance. And, of course, it can be embedded into other applications, much like Python.
Lua - Yet another scripting language. Lua is built on two principles: simplicity and minimalism. Designed with embedding in mind, Lua's overhead is tiny (it can be counted in hundreds of kilobytes at most) and the language itself only provides a handful of constructs...until you master the black arts of tables and metatables. Those who have done so claim to have seen the face of God; these claims are as yet unsubstantiated. What we do know, however, is that in recent years Lua has garnered much attention thanks to LuaJIT, a project that provides not only a just-in-time compiler for Lua (granting a massive speed boost - almost C-like performance, in fact), but also an FFI (foreign function interface) library, allowing Lua scripts to directly call C functions without the hassle of writing boilerplate C bindings.
C and C++ - The Ones Who Came Before. C and C++ are old, clunky, archaic...and among the most popular languages in existence. C was conceived in an era when memory was limited and programs were generally written in non-portable assembly languages. The underlying concept (which persists to this day) was that a compiler would take C code, turn it into assembly, and then turn that into an executable or library. Thus, programmers now only had to port a small part of the codebase instead of all of it! (The situation has improved considerably since then, of course - these days, only the really low-level stuff has a tough time being cross-platform.) C++ came some time later, and shook things up significantly with the concepts of classes and generics. Modern C++ also includes the Standard Template Library, or STL, which provides all sorts of useful functionality to make life almost painless. These days most people will learn something simpler like Python before these two, but make no mistake - everything from your operating system to your favourite video games to your microwave can trace its roots back to one of them.
C# - The poster boy for Microsoft's .NET Framework, C# is a JIT-compiled language modelled after C++, but without any of the associated pain. Though initially developed as a robust alternative for Windows development, Microsoft makes the specifications available at no cost, which has led to other implementations popping up - the most well-known being the cross-platform Mono. The main draw of C# is its ease of use: with a ridiculous number of APIs available by default and the ability to call into native code, you'll have a tough time finding something higher-level that you can't do in C#. The fact that Visual Studio, one of the best IDEs around, also supports C#, is just icing on the cake.
Android - Directly responsible for bringing this OP into the 21st century, Android is a pseudo-operating system framework which runs on top of Linux. Android is primarily written in Java, though it also combines elements of C, C++ and sometimes even Python, and is designed for mobile devices. The Android application framework is almost entirely event-driven, written primarily in Java, and makes heavy use of background services, model-view-controller, and sub-classing as a way of changing the behaviour of standard pieces. Also, it runs on your phone!
The real best part of working with Apple devices is that they ARE giving you a MBP.
We're getting the Dells that are the Windows equivalent of the high-end 15" MacBook Pro (except for the battery life). Frankly, the MBPs wouldn't work as well for us due to lack of vendor support. And the extra Thunderbolt connectors are great, but that still means carrying around a dock. I need my USB.
Bare minimum: 1 direct USB connection for JTAG, two more for UART 1 and mouse. Nice to have: UART 2 + a second direct connection to external test instruments. 3 ports gives me two direct connections + a tiny USB hub I can shove in my pocket when I need to hit the lab or go traveling.
Portability is a secondary concern. I've got work to do. And I'd like to limit the number of extra doo-dads I need since I'll be carrying a pile of the damned things around already.
What kind of test equipment can't handle going through a USB hub? Or is it because it's performance sensitive?
In my lab I have a USB hub servicing JTAG/ICE, Bench-Top Logic Analyzer, & 2xUART (FTDI UART<->USB) that go to a test fixture. They all play really well together.
thatassemblyguy (Janitor of Technical Debt)
Yeah, that'd be a really unfortunate driver if it can't handle enumerating devices/routing through a USB hub (honestly, I haven't looked at the USB protocol in a looong time, so I'm not sure what kind of transformative/routing operations go on under the hood of a hub that would be problematic or difficult to support in a host-side driver).
Lauterbach TRACE32 gets unhappy when I use a hub. As does the spectrum analyzer software (this one is probably doing something funny; we need exemptions for it in the malware protection suite). The Xilinx drivers for the Digilent debuggers also seem to get unpredictably unhappy (turning on bugchecking for drivers consistently crashes the OS when trying to program, so it's also obviously doing something funny).
How old is all y'alls lauterbach license? That's our JTAG/ICE vendor too, and we have no problems with TRACE32 through a USB hub. Though we did just update a lot of our cables/licenses to use recent (like 2016ish) software..
e: The other stuff, the other stuff, well yea.. embedded
Yeah, you gotta use what your needs dictate, for sure. Me, the most complex thing I need to plug in is a dongle w/ HDMI, USB (not C), etc., and I could probably lose that if I wasn't a monitor whore. Xcode, though, no getting away from that mistress.
Really a convert to USB-C though, that is just pleasant.
I love how we can just say that word, and just sort of nod our heads and understand.
Edit: Honest to god, most of us try to deliver really solid firmware to go along with the hardware, but sometimes... there are just developers who don't quite get it.
My "favourite" story was an ex-colleague who really pushed back on the idea of being able to do an intervention-free remote upgrade for a device that we were working on.
He really really wanted the user to push a button soon after power-up to begin the upgrade process, because that would bring it in line with the other products that he was working on.
Did I forget to mention that the device is frequently placed in inaccessible locations (several meters/a few dozen feet above ground level), and is water-tight with no buttons? So adding a button needs input from the mechanical team (to create a water-proof seal around the button that meets the IP rating, just for this upgrade process), input from the factory team (to retool the production line to support adding this button to the packaging), and input from the supply chain team (to actually source any and all materials necessary for this).
We had too many meetings before I ended up being tired of his antics, and escalated to his team lead and basically said, "Hey guys, I understand you want things to be consistent, but... you can't in this case. Soz."
Kakodaimonos (Code fondler, helping the 1% get richer)
I am not sad to be done with embedded stuff. But I do occasionally miss the days of setting test hardware on fire. Then again, my bugs won't actually kill people now, so that's a bonus.
There's some excitement, though, in working on embedded stuff that is so vitally important. Pressure, for sure, but also exciting!
It was usually more like, "Who fucking checked something in after we started the whole test process? Now we have to start it all over again."
I'm enough of a drunkard already. At least if I fuck up horribly now it's only money.
I started just as the new age of version control was ushered in, so I didn't have to deal with CVS (or other systems wtf clearcase) for very long. I'm glad we have a few workable options these days because we can adopt a lot of the CI/CD flow. Meaning that for the most part our latest stuff is tested to a high level of internal confidence, and the only testing that "kicks-off" is for qualifying the firmware for a particular standard, or customer specified level.
Now, if our qualification testing these days is kicked off by program management without some critical check-in, that's their mistake, and not the fault/responsibility of the development team. Essentially, we're no longer in the mode where we have to "freeze" code baselines while testing is happening, and it. is. great.
I would just like to state how overwhelmingly wonderful it is working with full unit and integration tests coming from a company that did none whatsoever.
Apparently since I left my last place they have actually started doing them on a few projects, though only because the customer forced them to in the contract. Remembering back to the code-base, I'm pretty sure it was going to be a herculean effort to actually get things into a state where they could actually be tested.
We've been dealing with this incredibly elusive bug. We're storing a small amount of encrypted data, which is used to make API calls on behalf of clients. It contains API keys, passwords, etc., stored by our users.
Sometimes, every so often, we decrypt a client's secret in order to make an API call, and the secret is just nonsense. Something's gone wrong with the encryption process.
This happens occasionally, perhaps 1 out of every 50 times. And so far, no one has been able to reliably reproduce the error.
But today... today I was able to figure it out. It's a long long story as to why, but the TLDR is this:
Just don't use datetime columns as your salt. Just don't. Create a column for just the salt, call it secret_salt, store it as a varchar field.
Also, don't use time as a salt. It's a real bad idea because everyone knows what time it is.
The key for the encrypted text is composed of two parts: (1) the created date of the record, in seconds-since-epoch, in string representation, and (2) a secret that is only accessible on the machines running the application. The two parts are concatenated, and the combined string is used as the key to encrypt the text. It is not a one-way hash, by the way, as we need to recreate the original text.
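To make the shape of that concrete, here's a hypothetical sketch of the construction as I understand the description (all names are made up, and the real code presumably differs - it apparently just concatenated the two strings as the key, whereas this feeds them through PBKDF2 to get a fixed-length key):

```python
# Hypothetical sketch of the per-record key derivation described above.
# Half the key material is the record's creation time as a string; the
# other half is a secret that only exists on the app servers.
import hashlib

MACHINE_SECRET = b"only-on-the-app-servers"  # placeholder value

def derive_key(created_epoch: int) -> bytes:
    salt = str(created_epoch).encode()  # "(1) created date ... in string representation"
    # PBKDF2 turns (secret, salt) into a fixed-length 32-byte key.
    return hashlib.pbkdf2_hmac("sha256", MACHINE_SECRET, salt, 100_000)

# Same inputs -> same key, which is what symmetric encryption needs...
assert derive_key(1500000000) == derive_key(1500000000)
# ...and the failure mode: if the stored timestamp is ever re-serialized
# differently (timezone shift, added microseconds, DST), the derived key
# silently changes and decryption produces garbage.
assert derive_key(1500000000) != derive_key(1500000001)
```

The design stands or falls on the timestamp string being byte-for-byte reproducible forever, which is exactly the kind of assumption a datetime column quietly violates.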
I'm looking at some documentation that was made around the time this was coded, and it looks like the justification was that we wanted a unique key per record. This way if someone was looking at the database they can't tell if two records are storing the same value (i.e., two records have the exact same secret information). Furthermore, it helps against rainbow tables.
I admit 100% ignorance when it comes to cryptography. I've been scouring the internet for the last couple hours trying to find some agreed-upon best practices for this sort of thing, and having some bit of trouble. There's tons of good stuff out there for storing passwords in a one-way hash, but that's not the case here.
Edit: Yeah, doing more reading, and re-reading the aforementioned documentation, it looks like the original authors of this code were confusing best practices for one-way-hashing with best practices for symmetric encryption. I'm not sure if they knew what they were doing any more than I do...
... you guys have documentation on why a decision was made?
I actually respect the original devs for that.
"Yeah, we highly suspect that we're about to fuck up, so here's some documentation explaining our decision so that maybe someone better than us in the future can help."
Yeah, for sure, the original developers were quite solid.
And in this case, they had a real shitty task. The more I think about it, the more I appreciate how awful this situation is.
We have to work with APIs with terrible authorization schemes. We need to store secret information for use in archaic and not-recommended authorization schemes. It would be a complete disaster if some of this information got out. Encrypting that information seems like a totally rational thing to do. But as I'm reading about different encryption schemes, I'm appreciating how crazy you can get with this sort of thing. There's no "best" practice, because what's best entirely depends upon your situation, how much time and money you're willing to spend, how important the security of the information is, and so on.
If it's just an identifier and it's not being used to protect information, it's probably not too terrible security-wise, but there are some pretty obvious problems if it's supposed to be a unique identifier and you're creating more than one record a second. Good on you, though, for finding where a bug like that is coming from.
And speaking of taking into consideration time and money... considering there are just three programmers at my company including me, I've just replaced that created-date salt with a 16-byte randomly-generated bytestring. Old records will be grandfathered in.
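For anyone landing here later, a random per-record salt like that is a one-liner with the standard secrets module (the column name and hex encoding here are just one way to do it, not necessarily what this poster did):

```python
# Generate a 16-byte random salt per record, stored alongside the
# ciphertext (e.g. hex-encoded in a varchar column like secret_salt).
import secrets

salt = secrets.token_bytes(16)  # cryptographically strong randomness
print(len(salt))                # -> 16
print(salt.hex())               # hex form is safe for a varchar column
```

Unlike a timestamp, this is unique with overwhelming probability even for records created in the same second, and it never needs to be re-derived from anything.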
Hopefully that's the best decision given I probably have maybe a day to work on this before getting to like, features that actually make us money. :P
Using the time as a salt is not necessarily horrible, with the caveat that you shouldn't be able to generate identical salts. Typically salts aren't even all that secret; they exist solely to prevent rainbow tables and increase the required effort, and simply need to be unique.
Since the salt space is still >> the actual number of accounts (if you are using sub-second resolution, anyway), it's also not worth trying to precalculate it.
And of course there are much better salt techniques
Wut wut. Don't suggest using UUIDs as salts, guys!
I'm not suggesting he use UUIDs as salts. He's not actually using the data as a salt but as a unique identifier in the database (if I understood what he said earlier correctly).
But having said that, JTAG isn't usually *that* fast, and neither are UARTs (well, I'm assuming RS232 or RS485 - they don't tend to signal that fast).
Edit: Or it's a custom device with a strange custom driver that goes weird when a hub is placed inbetween, and he can't fix the driver?
I sympathise.
Soon to be: Kotlin :rotate: for some of us who are only using Java :rotate: for android.
I call them Electrical Engineering majors.
Oh yeaaaaah, I believe that it is ... business time.
Whatever language you're using probably has a method of generating them for you.