
C, C++, C# Programming - what is the sense in this


PB666


Because 'A' isn't a string; it's a character. "A" is a string, which contains two characters: 'A' and '\0'. The trailing '\0', which happens to be ASCII 0, is there to indicate that the string has ended.

Here's a demo for you that might help.
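Something along these lines (a minimal sketch with made-up names and a made-up password; the point is the scanf-into-a-char-array plus strcmp pattern):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char pcPassword[] = "hunter2";   /* illustrative value only */
    char pcInput[32];

    printf("Enter password: ");
    scanf("%s", pcInput);            /* unbounded read: this is the buffer-overflow risk mentioned below */

    if (strcmp(pcInput, pcPassword) == 0)   /* strcmp returns 0 when the strings match */
        printf("Match.\n");
    else
        printf("No match.\n");
    return 0;
}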

Note that you should never, ever test passwords like that, and not just because one could buffer-overflow this code. But I don't think you're worried about security in your code. This approach is fine if you are just parsing a file, or something. And you can use sscanf and fscanf in exactly the same way.

There are also several variations of the strcmp function, including ones that ignore case differences and ones that compare only the first n characters. Look up the docs. Keep in mind that strcmp returns 0 when the strings are the same, which reads as false in a boolean test; it's counter-intuitive, but there are reasons for it (it's really a three-way ordering comparison) that probably aren't relevant to what you are doing.

P.S. I'm using Hungarian Notation here. The "pc" on variables stands for "Pointer to Char". Technically, char[] and char* aren't the same thing, but in this case, they serve the same purpose, so both are labeled with "pc".

This is what actually worked:

public static void Main (string[] args)
{
    string rawSequence = "ACTG";
    char[] Nt = rawSequence.ToCharArray();
    int a = 64, lenRS = rawSequence.Length;
    byte[] encNt = new byte[lenRS]; //WinDef.h
    for (a = 0; a < lenRS; a++) {
        if (Nt[a] == 65) { encNt[a] = 0; Console.Write(" {0}", encNt[0]); continue; }  // <-- this comparison was the problem
        if (Nt[a] == 67) { encNt[a] = 1; Console.Write(" {0}", encNt[1]); continue; }
        if (Nt[a] == 71) { encNt[a] = 2; Console.Write(" {0}", encNt[2]); continue; }
        if (Nt[a] == 84) { encNt[a] = 3; Console.Write(" {0}", encNt[3]); continue; }
        encNt[a] = 255;
    }
    Console.WriteLine(rawSequence);
}

The marked comparison (Nt[a] == 65) was the problem statement. I really thought I was going to end up doing a conversion nested into an if statement.

Edited by PB666
Removed repetitive code snippet

Oh, you meant comparing individual characters IN the string. Sure, that works. Like I said, a C-style string is just an array of chars. I think part of your problem is the fact that you're using a "string" type at all. There is absolutely no reason to bother with the overhead that involves.

Also, keep in mind that printing stuff to console isn't cheap, time-wise. It's good for debugging, but don't forget to disable it when you're actually trying to process billions of symbols.

Finally, you can actually use 'A' in place of 65. Exactly like that, with single quotes. Means exactly the same thing, but would make the code a bit easier to read.
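For instance, the encoding loop from above could be written against character literals; here's a C sketch of the same idea (illustrative only, not your C# code):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *rawSequence = "ACTG";
    size_t lenRS = strlen(rawSequence);
    unsigned char encNt[4];              /* sized for this example sequence */

    for (size_t a = 0; a < lenRS; a++) {
        switch (rawSequence[a]) {
            case 'A': encNt[a] = 0;   break;   /* same as comparing with 65 */
            case 'C': encNt[a] = 1;   break;   /* 67 */
            case 'G': encNt[a] = 2;   break;   /* 71 */
            case 'T': encNt[a] = 3;   break;   /* 84 */
            default:  encNt[a] = 255; break;
        }
        printf(" %d", encNt[a]);
    }
    printf("\n%s\n", rawSequence);
    return 0;
}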


Oh, you meant comparing individual characters IN the string. Sure, that works. Like I said, a C-style string is just an array of chars. I think part of your problem is the fact that you're using a "string" type at all.

No joke, but this stuff comes in as text files with the occasional or frequent delimiter. In a way the ASCII is good for confirmation; in a way it's bad, because I cannot go directly into processing, and every string that needs high-level processing will have to be converted from a string into quads.

There is absolutely no reason to bother with the overhead that involves.
You could say the same thing about chickens with regard to eggs.

Also, keep in mind that printing stuff to console isn't cheap, time-wise. It's good for debugging, but don't forget to disable it when you're actually trying to process billions of symbols.

It's one of the most expensive things you can do in VB. The printing is not too bad; the line feeding is like a temporal black hole. With most modern video cards the character writing is instantaneous, but when you scroll in a child window most of the screen needs to be reformulated. I used to cram the output into lines using For L1 = 1 To Whatever: Debug.Print VarX; " ";: If L1 Mod 100 = 0 Then Debug.Print: Next L1 '100 bits of output with spaces on a line

Finally, you can actually use 'A' in place of 65. Exactly like that, with single quotes. Means exactly the same thing, but would make the code a bit easier to read.

After I read your previous message, I looked up ' ' versus " "; better yet, I can define constants, which makes the code look even better.


Sure, constants work.

As for reading text, you can read it directly into a char array from file using the fread() command. You will have to do your own line/delimiter parsing, though, unless new lines and white spaces are the only delimiters in the file, in which case, fscanf is your friend.

#include <stdio.h>

FILE *f;
char pcBuffer[1024]; // This is fine, so long as you never expect to read more than 1023 characters at a time.

f = fopen("data.txt", "rb"); // Or whatever your file is called.
while(fscanf(f, "%1023s", pcBuffer) == 1) // fscanf takes the FILE* first; the width keeps reads inside the buffer.
{
    // Process pcBuffer
}
fclose(f);
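And if you want raw chunks rather than whitespace-delimited tokens, fread() looks more like this (just a sketch; the chunk size is an arbitrary choice):

#include <stdio.h>

int main(void)
{
    char pcBuffer[4096];                      /* one chunk at a time */
    size_t n;
    FILE *f = fopen("data.txt", "rb");        /* same assumed filename as above */
    if (f == NULL)
        return 1;

    while ((n = fread(pcBuffer, 1, sizeof pcBuffer, f)) > 0)
    {
        /* Process n raw bytes in pcBuffer; line/delimiter parsing is up to you. */
    }
    fclose(f);
    return 0;
}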


Sure, constants work.

As for reading text, you can read it directly into a char array from file using the fread() command. You will have to do your own line/delimiter parsing, though, unless new lines and white spaces are the only delimiters in the file, in which case, fscanf is your friend.

In the old days 64k was the limit on a text file; now the sky is the limit, though I wonder what the string limit is for C++, because I need to pack in 250M. Thanks for the snippet.


In the old days 64k was the limit on a text file; now the sky is the limit, though I wonder what the string limit is for C++, because I need to pack in 250M. Thanks for the snippet.

Well, for std::string, that's an easy question to answer: http://www.cplusplus.com/reference/string/string/max_size/

For a plain old char array it's system dependent. A char array declared on the stack will run into stack size limits really fast (the stack's not very big, although you can change how much memory is allocated to it); dynamically allocated with new... well... again, system dependent. The language itself puts no explicit constraints on the permitted sizes.
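In practice that means putting a buffer that size on the heap. A C-style sketch with malloc (the C++ equivalent with new or std::string behaves the same way for this purpose; 250M is the figure from the post above):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 250000000;              /* ~250 million characters */
    char *buf = malloc(n);
    if (buf == NULL)                   /* the allocation can fail; check it */
    {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    /* ... read the sequence data into buf and process it ... */
    free(buf);
    return 0;
}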


Oh, you meant comparing individual characters IN the string. Sure, that works. Like I said, a C-style string is just an array of chars. I think part of your problem is the fact that you're using a "string" type at all. There is absolutely no reason to bother with the overhead that involves.

Also, keep in mind that printing stuff to console isn't cheap, time-wise. It's good for debugging, but don't forget to disable it when you're actually trying to process billions of symbols.

Finally, you can actually use 'A' in place of 65. Exactly like that, with single quotes. Means exactly the same thing, but would make the code a bit easier to read.

Well, if you are going to write it 20 times, just #define A 65 and skip the quotation marks; even cleaner.


Here's a challenge.

What if you want to flip the order of bits in a register and also NOT them?

So for instance if you have

10110001 -- how to make it go to ----> 01110010

I have a rough idea how I might do it in assembly, and how to do it slowly in VB, but no idea how to do it efficiently in C.

Forgot to add: it's a 64-bit unsigned integer.

Edited by PB666

This is my VB code; it creates a nice table. (This is Excel: a little ActiveX button to create a code block and the Immediate window.)

On the fly I would have to determine the bits then use those to append the value of the target.

Private Sub CommandButton1_Click()
Dim L1, L2, L3, L4, L5, L6, L7, L8
Dim Zbytes(255) As Byte
Count = 0
For L1 = 0 To 1
For L2 = 0 To 1
For L3 = 0 To 1
For L4 = 0 To 1
For L5 = 0 To 1
For L6 = 0 To 1
For L7 = 0 To 1
For L8 = 0 To 1
target = 255 - L1 - L2 * 2 - L3 * 4 - L4 * 8 - L5 * 16 - L6 * 32 - L7 * 64 - L8 * 128
Zbytes(Count) = target
Debug.Print Count; " "; target,
Count = Count + 1
If Count Mod 8 = 0 Then Debug.Print
Next L8
Next L7
Next L6
Next L5
Next L4
Next L3
Next L2
Next L1
Debug.Print
End Sub

Number combination is relevant but order is irrelevant in the pairs.
0 255 1 127 2 191 3 63 4 223 5 95 6 159 7 31
8 239 9 111 10 175 11 47 12 207 13 79 14 143 15 15
16 247 17 119 18 183 19 55 20 215 21 87 22 151 23 23
24 231 25 103 26 167 27 39 28 199 29 71 30 135 31 7
32 251 33 123 34 187 35 59 36 219 37 91 38 155 39 27
40 235 41 107 42 171 43 43 44 203 45 75 46 139 47 11
48 243 49 115 50 179 51 51 52 211 53 83 54 147 55 19
56 227 57 99 58 163 59 35 60 195 61 67 62 131 63 3
64 253 65 125 66 189 67 61 68 221 69 93 70 157 71 29
72 237 73 109 74 173 75 45 76 205 77 77 78 141 79 13
80 245 81 117 82 181 83 53 84 213 85 85 86 149 87 21
88 229 89 101 90 165 91 37 92 197 93 69 94 133 95 5
96 249 97 121 98 185 99 57 100 217 101 89 102 153 103 25
104 233 105 105 106 169 107 41 108 201 109 73 110 137 111 9
112 241 113 113 114 177 115 49 116 209 117 81 118 145 119 17
120 225 121 97 122 161 123 33 124 193 125 65 126 129 127 1
128 254 129 126 130 190 131 62 132 222 133 94 134 158 135 30
136 238 137 110 138 174 139 46 140 206 141 78 142 142 143 14
144 246 145 118 146 182 147 54 148 214 149 86 150 150 151 22
152 230 153 102 154 166 155 38 156 198 157 70 158 134 159 6
160 250 161 122 162 186 163 58 164 218 165 90 166 154 167 26
168 234 169 106 170 170 171 42 172 202 173 74 174 138 175 10
176 242 177 114 178 178 179 50 180 210 181 82 182 146 183 18
184 226 185 98 186 162 187 34 188 194 189 66 190 130 191 2
192 252 193 124 194 188 195 60 196 220 197 92 198 156 199 28
200 236 201 108 202 172 203 44 204 204 205 76 206 140 207 12
208 244 209 116 210 180 211 52 212 212 213 84 214 148 215 20
216 228 217 100 218 164 219 36 220 196 221 68 222 132 223 4
224 248 225 120 226 184 227 56 228 216 229 88 230 152 231 24
232 232 233 104 234 168 235 40 236 200 237 72 238 136 239 8
240 240 241 112 242 176 243 48 244 208 245 80 246 144 247 16
248 224 249 96 250 160 251 32 252 192 253 64 254 128 255 0

I could go to 65536 and do an idiv of the long long into 65536, 65536^2, 65536^3.

From there, convert all values with the table and then multiply the low word by 65536^3, the 2nd word by 65536^2, the 3rd word by 65536; the high word remains unchanged.

Edited by PB666

Well, if you are going to write it 20 times, just #define A 65 and skip the quotation marks; even cleaner.


Loss of type safety, and loss of clarity, because anyone familiar with the language can't be 100% sure what A is (is it a macro, is it a constant, what's its value?) without going through the source and finding it, and all just to save two characters' worth of typing? Just write 'A'.
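To spell out the comparison (a tiny illustrative sketch; the constant name is made up):

#include <stdio.h>

#define A 65                       /* macro: no type, and the meaning is hidden at the use site */
static const char kAdenine = 'A';  /* typed constant: fine, but another name to chase down */

int main(void)
{
    char c = 'A';
    /* All three tests do the same thing; the literal is the most self-explanatory. */
    printf("%d %d %d\n", c == A, c == kAdenine, c == 'A');
    return 0;
}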

This is my VB code; it creates a nice table. (This is Excel: a little ActiveX button to create a code block and the Immediate window.)

[the VB code and 256-entry table quoted in full above]

Could have done that without 8 nested loops. Honestly I'd just do the reversing with a lookup table (of course this is C#, I don't know VB):

using System;

public class Test
{
public static ulong ReverseAndFlip( ulong n )
{
return ~Reverse( n );
}

public static ulong Reverse( ulong n )
{
return ( (ulong) smReverseByte[n & 0xFF] << 56
| (ulong) smReverseByte[(n >> 8) & 0xFF] << 48
| (ulong) smReverseByte[(n >> 16) & 0xFF] << 40
| (ulong) smReverseByte[(n >> 24) & 0xFF] << 32
| (ulong) smReverseByte[(n >> 32) & 0xFF] << 24
| (ulong) smReverseByte[(n >> 40) & 0xFF] << 16
| (ulong) smReverseByte[(n >> 48) & 0xFF] << 8
| (ulong) smReverseByte[n >> 56] );
}

static Test()
{
var reverseNibble = new byte[]
{
0x0, 0x8, 0x4, 0xC, 0x2, 0xA, 0x6, 0xE,
0x1, 0x9, 0x5, 0xD, 0x3, 0xB, 0x7, 0xF
};

smReverseByte = new byte[256];
for( int i = 0; i < 0x10; ++i )
{
int ni = i << 4;
for( int j = 0; j < 0x10; ++j )
{
smReverseByte[ni + j] = (byte) ( ( reverseNibble[j] << 4 ) | reverseNibble[i] );
}
}
}

private static readonly byte[] smReverseByte;
}


How much of a cost am I going to pay if I include a GTK+ 2.0 file-get GUI, if I remove the object after I have moved the filenames into a global variable?

This is using MonoDevelop, with C# only for the GUI. Do I run the GUI and save the filenames in a text file, or can I move into a C++ routine without classes?


Note that you should never, ever test passwords like that, and not just because one could buffer-overflow this code.

And just to add to this, one of the reasons for this is that strcmp() will return as soon as a character doesn't match, so passwords with the first character correct will take longer to check.

I can refer you to my constant-time implementation.
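The usual shape of such a comparison (a generic sketch, not the implementation being referred to) is to accumulate the differences so the loop always runs to the end:

#include <stddef.h>

/* Returns 1 if the two buffers of length len are identical, 0 otherwise,
   taking the same time either way. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; ++i)
        diff |= (unsigned char)(a[i] ^ b[i]);   /* no early exit */
    return diff == 0;
}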


This is my VB code; it creates a nice table. (This is Excel: a little ActiveX button to create a code block and the Immediate window.)

On the fly I would have to determine the bits then use those to append the value of the target.

This is not the code you want to show if you need to convince us that you need to resort to assembler to speed up your program.

I know, that's a different program. But if you're writing code that way, it's safe to assume that the path to optimization should be sought in better code, not in faster code. After the third nested loop you should really be thinking "there has to be a better way for this."


This is not the code you want to show if you need to convince us that you need to resort to assembler to speed up your program.

I know, that's a different program. But if you're writing code that way, it's safe to assume that the path to optimization should be sought in better code, not in faster code. After the third nested loop you should really be thinking "there has to be a better way for this."

It was fast to make; a small table didn't require many clock cycles, so it did not need to be optimized. At best the table would have 65536 values. In fact you missed the whole importance: the table was the basis; the loops were simply to make the table. The pointers are the table values, which find their own anti-value. The lookup is the time-consuming process; it is the process that will be repeated.

In VB it would look like this

Table(count) = Anti-value                          ' repeated only once for each element in the table

...

Encoded(1, ID) = 0
Encoded(1, ID) = EncodeFunction(PositionPointer)   ' call the cipher function
Encoded(2, ID) = Table(Encoded(1, ID))             ' repeated trillions of times

...

The point was that the general advice online was: make the table any way you can, then use the table, and don't try to encode the anti-value on the fly.

Edited by PB666

This may seem like a completely random injection into this thread, but given the thread is about the sense of the C programming language, this should be completely appropriate.

I was trying to write a file grabbing routine and was cloning code from the net and testing it when I ran across this:

int width, height;
this.GetDefaultSize( out width, out height );
this.Resize( width, height );
FileChooserDialog chooser = new FileChooserDialog( "Please select a File to Open", this, FileChooserAction.Open,
                                                   "Cancel", ResponseType.Cancel, "Open", ResponseType.Accept );

I went looking through my C/C++ books (5 of them); there is nothing about this anywhere, the object is not defined or mentioned, and surprisingly the code compiles. There was something briefly mentioned about this:

this->month = mn;

What da hail is this? Is there some super-secret C club in which these out-of-the-blue object handlers are passed around? :^)


It was fast to make; a small table didn't require many clock cycles, so it did not need to be optimized. At best the table would have 65536 values. In fact you missed the whole importance: the table was the basis; the loops were simply to make the table. The pointers are the table values, which find their own anti-value. The lookup is the time-consuming process; it is the process that will be repeated.

Well, my point is... if for something as trivial as this you're using convoluted code, what are you using for more complex code? Maybe your "real" code is written much tighter, but in a discussion that edges towards "we're not sure if you're approaching things the right way" this doesn't support your viewpoint.

You emphasize "I need the code to be as fast as possible because I need to do certain things trillions of times." Then you show a routine like this. That just supports my thoughts of "do you NEED to do it trillions of times?" And maybe you do. But for me the impression remains that you're trying to brute force your way out of an O(n²) problem that should really be converted into an O(n log n) problem or something along those lines.


Well, my point is... if for something as trivial as this you're using convoluted code, what are you using for more complex code? Maybe your "real" code is written much tighter, but in a discussion that edges towards "we're not sure if you're approaching things the right way" this doesn't support your viewpoint.

You emphasize "I need the code to be as fast as possible because I need to do certain things trillions of times." Then you show a routine like this. That just supports my thoughts of "do you NEED to do it trillions of times?" And maybe you do. But for me the impression remains that you're trying to brute force your way out of an O(n²) problem that should really be converted into an O(n log n) problem or something along those lines.

Ask yourself this question: why would Google be offering human genome studies the use of their servers if this stuff was easy to do? https://cloud.google.com/genomics/#get-started. Yes, trillions of times: data comes in and is processed, lots of data comes in and lots of processing is done. Or would you have me tell nature to stop producing trillions of data points? Every person on the planet carries about 6 billion data points; if you gather 1,000 people you have a trillion, a million people is a quadrillion, a billion people is a quintillion. Alignment, like space-time, is only local.

You missed the point entirely. I had a problem: how best to do this. The answer given online, by numerous people, was to make a table. The claim online is that there is no Intel x86-64 instruction to reverse the order of the bits.

So I brought the table method to the group and asked whether there is a better method, potentially one that could be used on the fly. If there is an on-the-fly method, then I can apply it at the assembly level and make it fast. If this were the end of the problem, it would be easily solved, but bit-reversal/XOR tables only work well for tables smaller than 2^30 entries. Once you get above this you need processing power.

So why would I ask for an on-the-fly method (note you did not provide one)? The answer is that I want to create an array based on 2^48 possible values, so it would be highly advantageous to have an on-the-fly method that reversed the order and NOT-ed the bits; the alternative is to carve it into 2 x 24 bits or 3 x 16 bits (which I could use a table for).

In order to divide it into three 16-bit parts, the QW first has to be copied, right-shifted 16, copied, and right-shifted again and copied. All bits higher than 16 need to be removed; to do this, take the register, create a dummy register, right-shift 16, left-shift 16, and XOR with the source register, then do the lookup. Once the lookup is done, IMUL register 1 by 4,294,967,296 and add it to register 3, and register 2 by 65536 and add it to register 3. Done.
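In C terms, the three-way split plus table lookup might look like the sketch below (rev16 here is an assumed 65536-entry table holding each 16-bit word reversed and NOT-ed, built the slow, obvious way just for illustration):

#include <stdint.h>
#include <stdio.h>

static uint16_t rev16[65536];

static void build_table(void)
{
    for (uint32_t w = 0; w < 65536; w++) {
        uint16_t r = 0;
        for (int b = 0; b < 16; b++)
            r |= (uint16_t)(((w >> b) & 1u) << (15 - b));
        rev16[w] = (uint16_t)~r;             /* reversed and NOT-ed */
    }
}

/* Reverse-and-NOT a 48-bit value: the reversed low word lands in the high position. */
static uint64_t reverse48(uint64_t q)
{
    return ((uint64_t)rev16[ q        & 0xFFFF] << 32)   /* x 4,294,967,296 */
         | ((uint64_t)rev16[(q >> 16) & 0xFFFF] << 16)   /* x 65,536 */
         |  (uint64_t)rev16[(q >> 32) & 0xFFFF];
}

int main(void)
{
    build_table();
    printf("%012llx\n", (unsigned long long)reverse48(0xB1ULL));
    return 0;
}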

The next issue is that each needs a lookup value; since 2^48 is too large for memory, it will have to go to disk. By doing this I can greatly compact the array, and in doing that I don't need to process trillions of values at once.

(Note the other post discussing use of a GUI to load parsed data files.) So a lookup value would be the number of files = 2^x, so that x is the number of bits. Once again you right-shift 48-x bits, left-shift 48-x bits, and XOR the value with the shifted value. Why 2^48? Because this is the minimum size at which heterologous sequence will have a single unique match up to 99.9969%. This might seem like overkill, but true heterogeneity is not a common feature of the genome; in fact, lack of heterogeneity is also very common.
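As a rough sketch of that masking (x = 12, i.e. 2^12 files, is just a placeholder value):

#include <stdint.h>
#include <stdio.h>

enum { X_BITS = 12 };   /* 2^12 files; the value of x is an assumption */

int main(void)
{
    uint64_t encode = 0x123456789ABCULL;                   /* some 48-bit encode */
    uint64_t fileIndex = encode >> (48 - X_BITS);          /* right-shift 48-x bits */
    uint64_t key = encode ^ (fileIndex << (48 - X_BITS));  /* shift back and XOR away the top bits */
    printf("file %llu, key %llu\n",
           (unsigned long long)fileIndex, (unsigned long long)key);
    return 0;
}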

So for each position we have the position, its encode, the encode's index, its reversed-XOR encode, and that encode's index.

Ideally this is all handled by assembly language, since I know exactly what I have to do and the minimum code to get there. In fact, since the data comes in as a text stream, it should be handled in large chunks in an assembly routine, placed in an array, returned to C, then filed in C in the array set up (10 billion data points).

Next is the search: load a file, create encodes in large chunks, define indices, sort the indices, search files by index, then open the file, look for matches, create a match file, and save. Repeat.

Now Google has massively parallel processing power with a cloud, which means technically having 2^48 memory locations is not out of the question; having 2^48 files is also not out of the question, but extremely questionable. But the Intel x86-64 has a limitation of about 64 GB (2^36) of memory. So basically having a means of dumbing down the data is useful.


But the Intel x86-64 has a limitation of about 64 GB (2^36) of memory. So basically having a means of dumbing down the data is useful.

That 64 GB limit is for 32-bit x86 processors using some ugly tricks. Current x64 processors support up to 256 TB of memory using 48-bit addresses. The architecture itself could support 4 PB (52 bits), but as far as I know, individual processes would still be limited to 256 TB of address space. The biggest x64 servers I've seen have 6 TB of memory (96 slots with 64 GB DIMMs).


That 64 GB limit is for 32-bit x86 processors using some ugly tricks. Current x64 processors support up to 256 TB of memory using 48-bit addresses. The architecture itself could support 4 PB (52 bits), but as far as I know, individual processes would still be limited to 256 TB of address space. The biggest x64 servers I've seen have 6 TB of memory (96 slots with 64 GB DIMMs).

You can get 32-bit x86 past 4 GB using paging; this is the same way they were able to get 16-bit x86 to support up to 1 MB using 20-bit addresses.

When 4 GB started to become a problem, you got paging support on some Windows Server versions; however, by that time 64-bit was on the roadmap and it was clear it was a temporary solution.

Should it not support 64 bits, not 52? Yes, I know it doesn't support that now because of a lack of need.

I remember when I bought my first computer, a 286: the 386's memory support was more than anybody would need, an insanely high number :)

Later there was some talk about 64-bit, but then only as virtual memory for large servers.

- - - Updated - - -

It was fast to make; a small table didn't require many clock cycles, so it did not need to be optimized. [...] The point was that the general advice online was: make the table any way you can, then use the table, and don't try to encode the anti-value on the fly.

Google is one of the most powerful tools we have.

Googling "reverse byte", this

http://stackoverflow.com/questions/746171/best-algorithm-for-bit-reversal-from-msb-lsb-to-lsb-msb-in-c

was the second result.

Without googling, I would probably have used an array to store the bit values, then two loops: one to read, another to store. I might have needed another array for the mask.

This way I could work directly on integers without having to mess with bit operations and specialized functions and data types that I would have needed the internet to look up anyway.
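For what it's worth, one of the standard on-the-fly answers on that sort of page is the shift-and-mask swap; a C sketch for the 64-bit reverse-and-NOT case (untested against your data, obviously):

#include <stdint.h>
#include <stdio.h>

/* Reverse the bits of a 64-bit value with shift/mask swaps, then NOT it. */
static uint64_t reverse_and_not64(uint64_t x)
{
    x = ((x & 0x5555555555555555ULL) << 1)  | ((x >> 1)  & 0x5555555555555555ULL);
    x = ((x & 0x3333333333333333ULL) << 2)  | ((x >> 2)  & 0x3333333333333333ULL);
    x = ((x & 0x0F0F0F0F0F0F0F0FULL) << 4)  | ((x >> 4)  & 0x0F0F0F0F0F0F0F0FULL);
    x = ((x & 0x00FF00FF00FF00FFULL) << 8)  | ((x >> 8)  & 0x00FF00FF00FF00FFULL);
    x = ((x & 0x0000FFFF0000FFFFULL) << 16) | ((x >> 16) & 0x0000FFFF0000FFFFULL);
    x = (x << 32) | (x >> 32);
    return ~x;
}

int main(void)
{
    /* 10110001 in the top byte reverses-and-NOTs to 01110010 in the bottom byte. */
    printf("%02llx\n", (unsigned long long)(reverse_and_not64(0xB1ULL << 56) & 0xFF));
    return 0;
}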


Should it not support 64 bits, not 52? Yes, I know it doesn't support that now because of a lack of need.

The 52-bit limit is based on the architecture. In the page table, only the middle 40 bits are used for the physical base address of the page, while the remaining bits are reserved for CPU/OS use. As the native page size is 2^12 bytes, this yields 2^52 bytes of physical address space.
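Spelled out, that's just 2^40 page-frame numbers times 2^12-byte pages; a trivial sketch of the arithmetic:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t frames    = 1ULL << 40;   /* width of the physical page-frame field */
    uint64_t page_size = 1ULL << 12;   /* 4 KiB native page size */
    printf("%llu bytes (= 2^52)\n", (unsigned long long)(frames * page_size));
    return 0;
}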


That 64 GB limit is for 32-bit x86 processors using some ugly tricks. Current x64 processors support up to 256 TB of memory using 48-bit addresses. The architecture itself could support 4 PB (52 bits), but as far as I know, individual processes would still be limited to 256 TB of address space. The biggest x64 servers I've seen have 6 TB of memory (96 slots with 64 GB DIMMs).

Despite the 64 GB figure, Sandy Bridge and Ivy Bridge chipsets only allowed a maximum of 32 GB of memory when they first came out; now there are some 64 GB and even some 128 GB boards out there. Most everything else is in the Xeon range. In most cases this is dependent on module size: each module is typically no more than 8 GB and DIMMs are 16 GB; the manufacturers limit this to 2 sets, and as memory size increases or slots increase this goes to 8. Increasing the number of DIMM slots apparently lowers memory transfer rates.

