So even if it's C++ (or whatever we're using), I cannot use a variable to set an array size unless I allocate memory? I can't do this:
int set[<variable>];
Correct, that is not allowed by the C++ spec, but it is allowed by C99 (see VLAs, variable-length arrays). Now gcc by default enables some C99 features when compiling C++ code, one of which is VLAs.
So your best option, when writing C++ code that you want to be sure matches the spec, is to at least use "-std=c++98 -pedantic".
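For the portable route, the usual answer is std::vector, which gives you a runtime-sized array without relying on the VLA extension. A minimal sketch (helper name is mine):

```cpp
#include <vector>

// Runtime-sized array the standard-C++ way: std::vector instead of a VLA.
// int set[n];  // with a runtime n, this is a C99/gcc extension, not standard C++
std::vector<int> make_set(int n) {
    std::vector<int> set(n);  // size chosen at runtime, memory managed for you
    return set;
}
```

With -std=c++98 -pedantic the commented-out VLA line would be rejected, while the vector version compiles everywhere.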
If I only had to deal with XML, yeah. But we also send data to set-top boxes (Roku, for example) where an errant character isn't just gracefully ignored: it crashes the damn app completely.
Shouldn't that be handled by whatever app is sending the data though?
It's going to sound like I'm making excuses, but we have a single API that all the devices use, and Roku was a device we added later on. Since the system was otherwise working perfectly, and I was constantly adding major modules of code to it, I just never got around to doing something like this (having the system substitute something Roku would understand).
I hate working on front-end web stuff. It is not my comfort-zone.
I also hate it when my boss thinks that we can continue the "fly by the seat of our pants" strategy we've used since the '80s with shit like storing electronic signatures for consenting to receive tax documents online, etc.
That goes over things I'm not sure I understand, but some of it I remember; it's C++98, I guess, or at least that's what I learned. I'm sure there are people here who know way more about what you need for a dynamic array, though.
So I spent part of last week using libnfs to write a small NFS client for this embedded thing I'm working on.
It would have been nice if libnfs came with any documentation whatsoever, but hey, that's an open source library for ya.
After I get it working, and test it against some random NFSv3 server on our development LAN, I go to test it with the craptastic legacy embedded system that it needs to work with.
It doesn't work, because the craptastic legacy system is using NFSv2. libnfs has NFSv2 support, but you need to use a completely different API for it, and I didn't realize this until now because of the aforementioned complete lack of documentation. So I need to rewrite half my damn client. Ugh.
This is a novice question I guess but maybe someone can enlighten me. This is a Swift question.
I know I need to work with integers larger than 2.15B (I need up to about 84B). So the correct type for me would be Int64, or just Int. According to Apple:
Int
In most cases, you don’t need to pick a specific size of integer to use in your code. Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size:
On a 32-bit platform, Int is the same size as Int32.
On a 64-bit platform, Int is the same size as Int64.
Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
What do they mean by "current platform"? If I use an Int64 (or just Int, knowing that values will exceed 2.15B), does that mean my app will be incompatible with iDevices running 32-bit versions of iOS?
Basically, if it's a 32-bit platform, it'll be Int32; if it's a 64-bit platform, it'll be Int64. Never assume anything about how many bytes a datatype can store. If you need a specific size, specify it in the code.
The B stands for billion. A signed 32-bit integer can hold values from -(2^31) through (2^31)-1, an unsigned 32-bit integer can hold values from 0 through (2^32)-1.
Basically, in most languages "int" will switch based on what compiler you use. It could be 8, 16, 32, 64, or more bits.
But an int32 is always an int32 and an int64 is always an int64.
My question is more - if I compile as Int64, does that make the program incompatible (at runtime, not build time) on 32-bit OSes. I'm assuming yes it will be incompatible, but I want to be clear.
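That split is easy to see with fixed-width types; here it is in C++ terms (the thread's other language), where <cstdint> pins the widths down while plain int floats with the compiler/ABI. The constant name is mine, using the ~84B figure from above:

```cpp
#include <cstdint>

// Plain int is whatever the compiler/ABI picks (commonly 32 bits, but not
// guaranteed); the fixed-width aliases are pinned by definition.
static_assert(sizeof(std::int32_t) == 4, "int32_t is always 4 bytes");
static_assert(sizeof(std::int64_t) == 8, "int64_t is always 8 bytes");

// A value like the ~84B mentioned above needs the 64-bit type:
constexpr std::int64_t kExpCap = 84000000000LL;
static_assert(kExpCap > INT32_MAX, "does not fit in 32 bits");
```

The same reasoning is why Apple tells you to reach for Int64 explicitly when the range matters.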
I am really curious what you are representing that requires values larger than 2.1B.
Money would be my first guess.
More or less, yeah.
You can get around this by using int16s and doing 4-5 subdivisions.
Like you'd see in RPGs : copper, silver, gold, platinum, unobtanium
That is, if you're not making a "real world dollars" program.
Actually, money is only one aspect. I've decided to go the crazy Disgaea route with stats. So my EXP table effectively reaches toward 100B.
I'm currently trying to tackle how best to generate random numbers with a min/max range beyond 4.3B, since arc4random_uniform() seems to be the preferred method, even in Swift, but only accepts UInt32.
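One common workaround: pull two 32-bit draws, glue them into a 64-bit value, then rejection-sample to avoid modulo bias. Sketched here in C++ with std::mt19937 standing in for the 32-bit source (in Swift that slot would be filled by arc4random()); the helper names are mine:

```cpp
#include <cstdint>
#include <random>

// Stand-in 32-bit source; in Swift this role would be played by arc4random().
static std::mt19937 rng{12345u};
std::uint32_t rand32() { return static_cast<std::uint32_t>(rng()); }

// Glue two 32-bit draws into one 64-bit value.
std::uint64_t rand64() {
    return (static_cast<std::uint64_t>(rand32()) << 32) | rand32();
}

// Uniform draw in [0, bound) without modulo bias (bound must be > 0):
// reject raw values at or above the largest exact multiple of bound.
std::uint64_t rand64_uniform(std::uint64_t bound) {
    std::uint64_t limit = UINT64_MAX - (UINT64_MAX % bound);
    std::uint64_t x;
    do { x = rand64(); } while (x >= limit);
    return x % bound;
}
```

The rejection step is the same trick arc4random_uniform() does internally; a plain rand64() % bound would slightly over-represent the low end of the range.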
I don't think the RPGs actually use that setup; I'm pretty sure internally they just track a number of coppers and convert for display. That's how WoW did it, and for a very long time they had a 214748g 36s 47c limit. Properly doing the math split across multiple values is much harder; you're writing a bigint library at that point.
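That scheme in miniature, as a hedged C++ sketch (function name is mine, assuming WoW-style ratios of 100 copper = 1 silver, 100 silver = 1 gold): keep one canonical copper count in a wide integer and split it only for display.

```cpp
#include <cstdint>
#include <tuple>

// One canonical count (coppers) in a wide integer; convert only for display,
// rather than doing arithmetic across separate gold/silver/copper fields.
std::tuple<std::int64_t, std::int64_t, std::int64_t> to_gsc(std::int64_t coppers) {
    return std::make_tuple(coppers / 10000,        // gold
                           (coppers / 100) % 100,  // silver
                           coppers % 100);         // copper
}
```

Plugging in 2147483647 coppers yields 214748g 36s 47c, i.e. the old cap quoted above was exactly a signed 32-bit copper counter hitting its max.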
What I'm basically talking about is:
let myInt: UInt64 = 2000000000
versus
let myInt = 2000000000
or
let myInt: UInt = 2000000000
I think that's how swift does variables, can't remember though, it's been a few months since I've looked at the book.
That's pretty lazy!
A custom class of two UInt32s that represent the high/low halves of a UInt64.
Nope. You can do 64-bit math on any CPU, even 8-bit ones; it's just slower. You will need separate builds, though, since you can't run a 64-bit binary on a 32-bit system, but the compiler takes care of all of that.
Sorry maybe I'm not wording the question the right way...
What I gather from what I'm reading at Apple's site is this:
var integerDude: Int
var integerDude64: Int64
Are identical because I'm compiling this on a 64-bit machine. Both variables are declared as Int64. But maybe I'm wrong.
But even that isn't really my question. My question is (and yes, this is probably exceedingly newbish): if I use 64-bit integers, compile my program, and sell it to someone using an iPod Whatever running a 32-bit version of iOS, is the game going to run or not? How does my use of 64-bit integers relate to the version of iOS running the compiled program?
No, you're right. But if you compiled that as a 32-bit binary, integerDude64 would still be a 64-bit integer, while integerDude would become 32-bit. Int is basically shorthand for whatever width the compiler is targeting at the time.
It's all well and fine to use that if you don't care about the range and are unlikely to ever hit it. If you know you're going to need billions, specify it as Int64, like your second one.
It gets better.
It also has < and > in it.
Something tells me I'm going to need to write my own xml reader.
The only proper response to the bolded is to post dis...
What the hell is 2.1B?
File sizes? Memory addresses?
That's when you just write a data sanitizer to run on the file as a preprocessing step.
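One shape that sanitizer could take, as a minimal C++ sketch (function name is mine): escape the markup-significant characters in the text content before it ever gets wrapped in tags. It assumes you run it on raw, unescaped text nodes only, so it won't double-escape existing entities or mangle the markup itself.

```cpp
#include <string>

// Escape '&', '<', '>' in raw text content before embedding it in XML.
// Apply to text nodes only, not to the finished document.
std::string sanitize(const std::string& text) {
    std::string out;
    for (char c : text) {
        switch (c) {
            case '&': out += "&amp;"; break;
            case '<': out += "&lt;";  break;
            case '>': out += "&gt;";  break;
            default:  out += c;
        }
    }
    return out;
}
```

For example, sanitize("a<b & c") comes back as "a&lt;b &amp; c", which any stock XML reader will happily swallow.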
Can't!
<> would flag false positives on XML entities wouldn't it?
Approximately 2.1 billion; specifically (2^31)-1.
Speculatively parse when you encounter stray <>s, then if you hit an error, back up and assume it's text!
(2^31)-1 = 2,147,483,647.
Yeah back to "write my own"
Though I think at this point it's going to be "ignore data sets that have this" and use the default xmlreader.
Read, encounter error, skip to next "row" item. Boom roasted.
If it's all in the one tag, can you just script CDATA around the text?
OK thanks.