
C, C++, C# Programming - what is the sense in this


PB666

Recommended Posts

C simply has "by value" as default mode. You can simply pass in a pointer, and then treat it as a by-reference call. In fact, C++ makes that explicit. void func(int &x) takes integer x as parameter passed by reference.

// C++ style reference.
#include <stdio.h>

void foo(int &x)
{
    x = 5;
}

int main(void)
{
    int y = 7;

    foo(y);
    printf("y = %d\n", y);

    return 0;
}

// C style reference.
#include <stdio.h>

void foo(int *x)
{
    *x = 5;
}

int main(void)
{
    int y = 7;

    foo(&y);
    printf("y = %d\n", y);

    return 0;
}

And this is what code for foo would look like if you want to write it in assembly.

.intel_syntax noprefix
.global _foo
_foo:
    push ebp
    mov ebp, esp
    push edi                # edi is callee-saved in the cdecl convention

    mov edi, [ebp+8]        # first argument: the address of x
    mov eax, 5
    mov [edi], eax          # store 5 through that address

    pop edi
    mov esp, ebp
    pop ebp
    ret

That one will have to be called in C style.

#include <stdio.h>

void foo(int *);

int main(void)
{
    int y = 7;

    foo(&y);
    printf("y = %d\n", y);

    return 0;
}

Edited by K^2

C simply has "by value" as default mode. You can simply pass in a pointer, and then treat it as a by-reference call. In fact, C++ makes that explicit. void func(int &x) takes integer x as parameter passed by reference.

// C++ style reference.
void foo(int &x)
{
    x = 5;
}

int main(void)
{
    int y = 7;

    foo(y);
    printf("y = %d\n", y);

    return 0;
}

This is a bit disconcerting. I'm not going to say I missed this, but I have two C++ books and both glossed over this.

The ampersand is nothing but an aliasing identifier, but because of the way arguments are defined in C++, applying it turns them into a special reference to a variable when it is passed. Makes sense, but the problem is that once an alias is made it cannot be unmade. If I then create int z = 12 and call foo(z), I should hope that the previous function call had been cleared from scope; otherwise, problems.

I don't know if you have this, but check out pages 32-34 of Microsoft's introduction to C++... A short programming guide to C++. I don't know what the example intended to do other than confuse the crap out of the reader, but it looks like a survey of all the various ways to make things go wrong with referencing.

Anyway, let's clarify this: foo(int &x) creates an alias to a calling variable when the function is called. It's the same as saying that x's address has a name in two places: y is its birth name and foo.x is one of its names when it's in function land. Unlike Vegas, what happens in function land does not stay in function land.

Edited by PB666
Repaired ambiguous reference to x

This is a bit disconcerting. I'm not going to say I missed this, but I have two C++ books and both glossed over this.

The ampersand is nothing but an aliasing identifier, but because of the way arguments are defined in C++, applying it turns them into a special reference to a variable when it is passed. Makes sense, but the problem is that once an alias is made it cannot be unmade.

The ampersand serves multiple functions, but in this case it's a pass-by-reference operator, not an aliasing identifier. It simply means that when you call this function, the compiler will ensure somehow that the parameters passed by reference are actually referenced. Usually it will do this by pushing the address of the variable onto the stack (which is the explicit implementation in C that K^2 showed), but C++ abstracts that a bit and leaves the actual implementation up to the compiler. In C#, you do the same thing using the keyword 'ref':


// C# style reference.
using System;

class Program
{
    static void foo(ref int x)
    {
        x = 5;
    }

    static void Main()
    {
        int y = 7;

        foo(ref y);
        Console.WriteLine(y);
    }
}

Note that in C#, only value types (int, long, double, float, bool, struct, enum) are ever passed by value. All other types (i.e. classes) are effectively passed by reference: the variable holds a reference to the object, and that reference is what gets copied, so the callee works on the same object.

If I then create int z = 12 and call foo(z), I should hope that the previous function call had been cleared from scope; otherwise, problems.

Every time you call the function, a new set of parameters is pushed onto the stack and used to execute the function. It doesn't matter whether the previous call has been cleared from scope or not; you can even call the function from inside itself. The only way you can create problems is to declare your local variables inside the function (x in this case) with static storage, which means they occupy one fixed location and keep their values between calls rather than getting a fresh copy on the stack. This proves useful for recursion or for tracking values.
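To illustrate that last point, here's a minimal C sketch (the function name is just made up for the example): a static local is initialized once and keeps its value across calls, which is exactly why it can both track values and surprise you.

#include <stdio.h>

/* static local: initialized once, lives for the whole run of the program */
int call_count(void)
{
    static int n = 0;
    return ++n;        /* every caller sees and updates the same n */
}

int main(void)
{
    printf("%d\n", call_count()); /* prints 1 */
    printf("%d\n", call_count()); /* prints 2 */
    printf("%d\n", call_count()); /* prints 3 */

    return 0;
}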

Anyway, let's clarify this: foo(int &x) creates an alias to a calling variable when the function is called. It's the same as saying that x has a name in two places: y is its birth name and foo.x is one of its names when it's in function land. Unlike Vegas, what happens in function land does not stay in function land.

It's not quite aliasing. Both C and C++ require some understanding of the underlying memory model. Variables describe memory locations that hold values. When I say "int x=0", the compiler assigns the name x to a particular memory location and stores the value 0 there. (Actually it's the linker that assigns memory locations; the compiler simply exports a set of symbols to the linker.) Every time I refer to x, the compiler grabs the current value from that memory location. When I pass a variable to a function, I can either pass the value of the variable or the address of the memory location where the variable is stored. If I pass the value, then it doesn't matter what I do in the function, because the function only has the value, not the variable itself. If I pass the memory location (i.e. pass-by-reference), then what happens in the function affects the value of the variable because it's affecting the same memory location.


The ampersand serves multiple functions, but in this case it's a pass-by-reference operator, not an aliasing identifier. It simply means that when you call this function, the compiler will ensure somehow that the parameters passed by reference are actually referenced.
I'd actually like to see a literary reference for this; I did not pick this out of a couple of books. It's like, gee wow, with C++ you can avoid dealing with pointers and here's how: a reference identifier, and by the way it's just an alias... and here's another thing you can do, stick them in a function definition. The Microsoft description is confusing and I still don't know how they work in struct statements, since MS totally avoids struct issues in C++.

Usually it will do this by pushing the address of the variable onto the stack (which is the explicit implementation in C that K^2 showed), but C++ abstracts that a bit and leaves the actual implementation up to the compiler. In C#, you do the same thing using the keyword 'ref':

We'll leave C# out of this for a while; I don't need it to write ASM integratives. But the question is whether one can reference an array and access the array in ASM. Obviously I don't want to pass the array's contents on the stack, as that would overrun the stack; you might want to pass the array dimensions instead.

I think I need to run a trial of this in C++ and see how it functions; maybe that will clear things up. Thanks for the help.

Note that in C#, only value types (int, long, double, float, bool, struct, enum) are ever passed by value. All other types (i.e. classes) are always passed by reference.

Yes, the problem however is: what are the specific stack-handling procedures when C# passes to ASM, and do they differ from C++?

Every time you call the function, a new set of parameters are pushed onto the stack and used to execute the function. It doesn't matter if it's been cleared from scope or not. You can call the function from inside itself. The only way you can create problems is to declare your local variables inside the function (x in this case) with static scope, which will essentially mean that they're always called by reference. This feature proves useful for recursion or for tracking values.

This is somewhat in contradiction to what is written, or maybe there are nuances to it. The issue is that the & character creates a reference. At least the way I understand it, given char myString[] = "hey bubba"; and &heyBubba = mystring, heyBubba always refers to the location of "hey bubba", and no other comparable reference can refer to it or potentially alter mystring as long as heyBubba exists.

When I pass a variable to a function, I can either pass the value of the variable or the address of the memory location where the variable is stored. If I pass the value, then it doesn't matter what I do in the function, because the function only has the value, not the variable itself.

At least in theory, if C passes arguments to C functions the same way it does to ASM functions, you could interrupt the stack. Before you return, you remove the 'return' register values from the stack and tuck them into memory elsewhere, then you remove the arguments from the stack, change them, and put them back, then place the return values back into their registers and return. When they are returned, however, the C compiler's code will simply remove them from the stack and make them go bye-bye, unless you stop C from doing this (theoretically by a Kobayashi Maru maneuver) and then replace the originating variables with the values. This is probably one of those C transformations that the whole GNU world would frown upon. Theoretically you could create a new type of function call, let's say &foo(argument list)&, that tells the compiler to keep track of the address of the sending variable (where else but on the stack), pulls the arguments off the stack and places them back into the variables' memory locations, and after that removes 8 or 16 bytes from the stack pointer per argument, and no one is any the wiser to the whole ordeal.

The problem with passing references between C and a routine is: what if the routine or the C program is very big? Would the ASM actually be referencing the right address in the right segment? As long as the programs are in the same 64k segment that's not a problem, but if my routine is in another segment then that reference also needs to be combined with the appropriate segment address. For this to work seamlessly I need to be able to alter a variable defined in C main, and in main's segment, from ASM main and whatever segment it lies within. If my routine is compiled separately from C and added as a runtime library, how would it work?

If I pass the memory location (i.e. pass-by-reference), then what happens in the function affects the value of the variable because it's affecting the same memory location.

We can make this simple: if I compile several functions into different C runtime libraries, can I pass variables by reference and be assured that the memory location will be accessed? Do the variables need to be declared global, and how is this best done in main (or at the structure level)?

Edited by PB666

char myString[] = "hey bubba"; &heyBubba = mystring, that heyBubba always refers to the location of "hey bubba" and no other comparable reference can refer to it or potentially alter mystring as long as heyBubba exist.

You can't do "&heyBubba = mystring;". The & operator returns an address, which you can't assign to. You could say "char *heyBubba = mystring;" but you can't assign an address directly to a non-pointer variable in C. There's no mechanism for preventing another variable from referring to the same memory location and changing it. The const keyword provides some protection whose strength depends on the compiler, but there are usually ways around it with pointer tricks.
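To make that concrete, here's a small sketch; the variable names echo the ones above, and the write through the alias is just an invented illustration:

#include <stdio.h>

int main(void)
{
    char myString[] = "hey bubba";
    char *heyBubba = myString;        /* legal: a pointer is how you alias it in C */

    heyBubba[4] = 'B';                /* writing through the alias... */
    printf("%s\n", myString);         /* ...changes myString too: "hey Bubba" */

    const char *readOnly = myString;  /* const offers some protection... */
    /* readOnly[0] = 'h'; */          /* ...uncommenting this line would not compile */

    return 0;
}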

We can make this simple: if I compile several functions into different C runtime libraries, can I pass variables by reference and be assured that the memory location will be accessed? Do the variables need to be declared global, and how is this best done in main (or at the structure level)?

Any variable declared with extern scope, or declared at the top level (i.e. outside of any function definition) of a module (*.c file plus included *.h files) will have global scope and will be assigned a symbol by the linker. Typically each module is compiled to an object file (*.o or something), which contains assembly code with linker symbols in place of memory locations, and some information for the linker about how much and what type of memory the module needs. The linker trolls through all of the object files you give it, looking for symbols (function names are symbols too) and creating a symbol table: symbol name, number of bytes needed, initialization value (if any), and special instructions for it (like an explicit address assignment, or an assignment to a specific memory block using preprocessor directives or something.) Once the symbol tables are compiled, it assigns addresses to all the symbols, then pulls all the assembly together into final machine code, with the proper addresses in place of symbols. The linker doesn't care what language your source code is in; it always works with compiled assembly. It has no problem integrating an assembly object file, provided the symbol table exists.
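As a rough two-file sketch of what that looks like in practice (the file names, the symbol g_counter, and the gcc commands are just assumptions for the example):

/* counter.c -- defines the symbol */
int g_counter = 0;           /* top level of the module, so it gets a linker symbol */

void bump(void)
{
    g_counter++;
}

/* main.c -- uses the symbol defined in the other module */
#include <stdio.h>

extern int g_counter;        /* declaration only: the linker supplies the address */
void bump(void);

int main(void)
{
    bump();
    bump();
    printf("g_counter = %d\n", g_counter);   /* prints 2 */

    return 0;
}

The build would be something like gcc -c counter.c main.c followed by gcc counter.o main.o -o demo; an assembly object file exporting the same symbol names would link in exactly the same way.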

It's worth looking, sometime, at all the intermediate files generated by your compile chain to get a sense of what each module does.

tl;dr - If you pass a parameter by reference, you can always be assured that the correct memory location will be addressed, regardless of whether your passed variable is global in scope or not. The function has to have global scope, but functions are by default.

Edited by Mr Shifty

You can't do "&heyBubba = mystring;". The & operator returns an address, which you can't assign to. You could say "char *heyBubba = mystring;" but you can't assign an address directly to a non-pointer variable in C. There's no mechanism for preventing another variable from referring to the same memory location and changing it. The const keyword provides some protection whose strength depends on the compiler, but there are usually ways around it with pointer tricks.
Must be a miscommunication; I thought we were talking about C++. Anyway, I'll look later. Tonight is a coding night, no more reading.
If you pass a parameter by reference, you can always be assured that the correct memory location will be addressed, regardless of whether your passed variable is global in scope or not. The function has to have global scope, but functions are by default.

In C, C++ or C#? I know in MS you have to be explicit about scope in many instances, particularly variables.


You need to actually learn a few things about l-values vs r-values and scope.

Passing a parameter by reference has nothing to do with scope at all. You literally pass in a pointer to a memory location where the relevant data is stored. Scope is irrelevant at this point. Lifetime of that data might be relevant. Here is an example of a thing you should NOT be doing.

#include <stdio.h>

static int* var1;

void function1(volatile int &var2)
{
    var1 = (int*)&var2;     // stash the address of a caller's local -- bad idea
}

void function2(void)
{
    volatile int var3 = 7;

    function1(var3);        // var1 now points at var3, which dies when we return
}

void function3(void)
{
    volatile int var4 = 12; // likely reuses the stack slot var3 occupied

    printf("*var1 = %d\n", *var1);  // dereferences a dangling pointer
}

int main(void)
{
    function2();
    function3();

    return 0;
}

The "unexpected" behavior of this program is that it will print out "*var1 = 12". This has to do with the fact that var4 got allocated to the same place in stack as var3 used to occupy. So instead of printing val3, we're really asking the program to print var4. This is something to be aware of when passing variables by reference. But you have to get creative to actually break a program in this way.

Edited by K^2

PB666, about your genome processing code: have you profiled your code? Which parts are too slow?

The normal situation when a program is "too slow" is that:
1. the programmer has guessed completely wrongly about where the program is slow;
2. the program spends more than 95% of its time executing less than 5% of the code -- that's the only code that needs to be re-written;
3. changing the algorithm, still using the same language, makes the problem go away.

Unless you profile, you're wasting your time.

Also, modern compilers know almost infinitely more about the hardware they are compiling for than do 99.999% of programmers. Don't try to force the compiler to do things; write simple, clear code and let the compiler do its thing.

Optimization effort (including re-writing in a different language) is completely wasted, unless you profile your code, so you know where the problems are.

Of course if you just want to learn C or C++ or C# (three very different language ecosystems, by the way), then fine: go right ahead. I'm learning some C myself, using 21st Century C and Learn C the Hard Way. But each of those languages will take you about a thousand hours to get reasonable proficiency (sufficient for a well-written 10,000 LoC application); while you're learning the second language you'll lose proficiency in the first, because they're different.

I mean the following as friendly advice: from what you have written, you'd get more value from spending that time (and book money) on learning algorithms, in Python or Lua (or VB.NET, if you prefer). Secondly, if you do use C or C++, get in the habit of using valgrind and gprof every time you compile. (C#: There's a choice of profiling tools in the .NET world, it seems.) Oh, and if compiling for your own use on i386, use gcc -march=native to have gcc automatically use native hardware features like SSE2, ..., SSE4.1, or AVX.


Optimization effort (including re-writing in a different language) is completely wasted, unless you profile your code, so you know where the problems are.

This is actually wrong.

Roughly speaking, profilers are useful when you're writing software, but less so if you're writing algorithmic code. If your code is a complex combination of simple tasks, you usually need a profiler to see where the bottlenecks are. On the other hand, if your code is a simple combination of complex tasks, you're just the wrong person for the job if you can't see where the bottlenecks are without any help 95% of the time.

There are also some common pitfalls when using profilers with algorithmic code (like in this case). One is that you may be measuring the wrong thing. It's quite common with algorithmic code that the bottleneck depends on input size, input type and/or the environment. By profiling with a handful of test cases and optimizing the code for them, you may actually be hurting its performance with real data. Thorough performance measurements may take weeks, months, or even years. Another issue is that the measurements themselves are wrong. Measuring the performance changes the behavior of the program and often moves the low-level bottlenecks to different places.


Measuring the performance changes the behavior of the program and often moves the low-level bottlenecks to different places.

That's usually an indicator that your performance tools weren't written by someone who understands how to profile things quietly. There are ways to profile code with as little overhead as you need. E.g., reading the system clock and storing the result in a variable requires multiple calls, branches, and potentially cache misses, whereas a read from the TSC to a register is almost free.
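A minimal sketch of that kind of lightweight timing, assuming GCC or Clang on x86 and the __rdtsc() intrinsic (the work loop is only a placeholder):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>            /* __rdtsc() on GCC/Clang */

static volatile double sink;      /* keeps the optimizer from deleting the loop */

static void do_work(void)
{
    double s = 0.0;
    for (int i = 1; i <= 1000000; i++)
        s += 1.0 / i;
    sink = s;
}

int main(void)
{
    uint64_t t0 = __rdtsc();      /* a read from the TSC straight into a register */
    do_work();
    uint64_t t1 = __rdtsc();

    printf("elapsed: %llu cycles\n", (unsigned long long)(t1 - t0));

    return 0;
}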


Just wanted to ask, is Lua a good language to learn how to program? What do people think of it, anyway?

I already went through it, so this question is out of pure curiosity.

A bit late to this: I really enjoy Lua, it's tiny but I keep finding more nuances all the time - I'd also not recommend it as a starter language, it's a bit *too* fluid IMO. I learned on Turbo Pascal ( was it always OOP capable? my compiler came with a class library anyway ) & wrote C & Perl in roughly equal measure for a number of years - I'd happily recommend a similar path to anyone wanting to start now, although I've no idea what the equivalent would be these days.

Not coded anything seriously - at least not in C, rarely in anything else - for a decade thanks to a horrendous accident, so anything I have to say is probably best taken with some salt.


It's history. There were Bell Labs, there was Unix and there was C (early 70s). Mainly, C became popular since Unix itself was rewritten from assembly language into C, and Unix was pretty common in those days.

Generally, there were also ALGOL, FORTRAN, PASCAL, etc but they were not universal enough and there were many dialects varying from one platform to another, from one compiler to another. So, the fact that there was a commonly accepted language standard added to the popularity of C.

Then came C++ (mainly 'C with classes', and many new adherents followed the OOP paradigm). The language was adapted for the IBM PC platform (now known as x86) and quickly became popular together with the platform itself. Certainly there were other languages, but C++ compilers generated native code, unlike their many counterparts that required runtime libraries. Thus C/C++ became the main tool for application development on x86.

It all runs along the same rails even now. Young people grow up with the knowledge that C++ is a 'must have' language in their portfolio, then they write applications, and then another generation of programmers grows up with this knowledge.

Now, fortunately, CPU speeds and memory are huge and there's no need to count every CPU tick and every memory byte. This situation allowed the creation of huge frameworks that encapsulate many standard tasks in their classes. Java and C# were born.

C#/Java allow you to quickly develop the tools you need without thinking about memory management, garbage collection, I/O, etc. They allow you to concentrate on the task you really need to solve. But everything comes at a price: you've got the .NET Framework or the Java VM, you've got slower execution performance and larger memory requirements.

Every task requires an adequate tool, that's all. If you need performance or low system requirements, use assembly / C / C++. If you simply need to draw a window with a button without any particular requirements, use the modern languages because it's simpler.

As for the brace matching - here's my advice - comment any closing brace:


namespace MyNamespace
{
    class MyClass
    {
        void MyMethod(bool someCondition)
        {
            while (someCondition)
            {
                try
                {
                } // try
                catch (...)
                {
                } // catch
            } // while
        } // MyMethod
    }; // MyClass
} // namespace MyNamespace

this way you won't lose anything.

Why didn't I think of that? Thanks!


That's usually an indicator that your performance tools weren't written by someone who understands how to profile things quietly. There are ways to profile code with as little overhead as you need. E.g., reading system clock and storing result in a variable requires multiple calls, branches, and potentially cache misses. Whereas a read from TSC to a register is almost free.

If the performance tools require special compiler options instead of working directly with the production executable, they're obviously producing wrong results. Similarly, if they execute any CPU instructions or use registers, cache and/or memory, they're by definition altering the behavior of the program.

A profiler is a rough tool. If there is an obvious bottleneck in the code, the profiler can probably find it. If there are multiple and/or less obvious bottlenecks, the profiler may not be able to identify them correctly. C++ code is particularly hard to profile, because a change in one place may lead the optimizer to make different decisions in a seemingly unrelated place.


If the performance tools require special compiler options instead of working directly with the production executable, they're obviously producing wrong results. Similarly, if they execute any CPU instructions or use registers, cache and/or memory, they're by definition altering the behavior of the program.

There are cache lines you can pretty much assume are going to be loaded - anything that contains current stack pointers for your process. An rdtsc call and a subtract to memory that's already in cache is going to cost you a few clock cycles, no additional latency, and no cache misses. It's as close as you can get to it being free and still have it do something. You can still mess up your timing by inserting this code in the middle of a tight loop, but if you are wrapping performance counters around something that takes less than 100 cycles to complete, you are just doing it wrong.

With correct usage, the impact of profiling code is smaller than fluctuations you get due to variations in thread-switching and cache miss stalls during normal operation. If the impact you introduce is smaller than fluctuations, it will not affect the results. You will not suddenly see it stall in a place it did not before.

But sure, if you use off-the-shelf profiling tools, and implement them haphazardly without understanding what they do, you'll get poor results. But it's your own fault then.


Just some light reading. Don't click if you have a slow connection; it's 3,603-plus pages that will put a highly caffeinated clinical insomniac to sleep.

http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-manual-325462.pdf

OK, having got that out of the way, I don't fathom how they are implementing 64-bit operations; it seems that some of the code is cryptic (and I don't mean in the usual "Intel IA-32 codes were written by Martians" way).

So for instance you have a lot of restrictions: you cannot add a 64-bit immediate to a 64-bit register, 32 bits is the limit. I am assuming that RAX is the 64-bit instance of the A register, and that RBX, RCX and RDX are the other registers. But there are supposed to be 8 more registers, with no real way of accessing them. Although I take it that R8-R15 are those registers and that they only allow 64-bit operation. IOW, can I only load and operate on 64 bits in these?

e.g.

mov r8, [m1]
adc rax, r8


You can still mess up your timing by inserting this code in the middle of a tight loop, but if you are wrapping performance counters around something that takes less than 100 cycles to complete, you are just doing it wrong.

In the kind of code I'm talking about, the performance impact of anything taking more than 100 cycles (but less than a second) is usually obvious. We know quite precisely what the code is doing, and there are no black boxes or complex data-dependent call graphs around. The impact of high-level algorithmic choices can be less obvious, while it's hard to determine what takes the most time inside the innermost loops.

With correct usage, the impact of profiling code is smaller than fluctuations you get due to variations in thread-switching and cache miss stalls during normal operation. If the impact you introduce is smaller than fluctuations, it will not affect the results. You will not suddenly see it stall in a place it did not before.

My favorite is a loop that my laptop executes either 0.5 billion or 2 billion iterations/second, depending on whether a certain unrelated piece of code is present in the same compilation unit. The optimizers of modern compilers are incredibly smart, except when they happen to be incredibly stupid or incredibly unpredictable.


In the kind of code I'm talking about, the performance impact of anything taking more than 100 cycles (but less than a second) is usually obvious.

If you can predict where cache and thread contention/thrashing happens in multi-threaded code that has both complex control structures and tight computational loops, you are a wizard, and laws of human logic do not apply to you. The rest of us need to do frame captures to identify problem areas, and that requires snippets of performance-tracking codes scattered throughout.

My favorite is a loop that my laptop executes either 0.5 billion or 2 billion iterations/second, depending on whether a certain unrelated piece of code is present in the same compilation unit. The optimizers of modern compilers are incredibly smart, except when they happen to be incredibly stupid or incredibly unpredictable.

Again, if your performance analysis code affects optimization, you don't know how to write performance analysis code.


If you can predict where cache and thread contention/thrashing happens in multi-threaded code that has both complex control structures and tight computational loops, you are a wizard, and laws of human logic do not apply to you. The rest of us need to do frame captures to identify problem areas, and that requires snippets of performance-tracking codes scattered throughout.

Remember that I was talking about simple combinations of complex things, not about complex combinations of simple things. Threads may run for minutes or hours independently. At any level above the innermost loop bodies, there are probably 1-3 sequentially executed tasks that take a nontrivial amount of time, and a number of smaller tasks that are orders of magnitude faster. You typically have no control over what other CPU/memory intensive processes are running at the same time.

Again, if your performance analysis code affects optimization, you don't know how to write performance analysis code.

My point was that it's always a minor change that pushes the optimizer past a critical threshold. You add a single instruction to a function or call it in one more place, and the optimizer decides that it's no longer beneficial to inline the function. That's the fundamental nature of binary choices.


Remember that I was talking about simple combinations of complex things, not about complex combinations of simple things.

And what do you think is inside these complex things? Anything complex in a computer is just a complex arrangement of simple instructions. But no matter. If all you have is a few complex algorithms, it's even more important to profile to know which one is the bottleneck. Then you dive inside and you profile the simple things it's built out of to see why.

Threads may run for minutes or hours independently. At any level above the innermost loop bodies, there are probably 1-3 sequentially executed tasks that take a nontrivial amount of time, and a number of smaller tasks that are orders of magnitude faster. You typically have no control over what other CPU/memory intensive processes are running at the same time.

The threads can be running in different processes on different CPUs in different rooms on a supercomputer cluster. That doesn't make any difference. You still have to profile each thread to know what actually needs improvement.

My point was that it's always a minor change that pushes the optimizer past a critical threshold. You add a single instruction to a function or call it in one more place, and the optimizer decides that it's no longer beneficial to inline the function. That's the fundamental nature of binary choices.

For all practical purposes, a static __inline will always get inlined. Has to do with scope. You should never, ever rely on the optimizer to make these sorts of decisions for you. If it should be inlined, inline it. If it shouldn't, don't. Pretty simple.
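For reference, a minimal sketch of that (static inline is the GCC/Clang spelling; __inline is the MSVC-style keyword, and the functions here are invented examples):

/* static limits the symbol to this translation unit, so there is no
   external caller to worry about and the compiler can always inline it */
static inline int square(int x)
{
    return x * x;
}

int sum_of_squares(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += square(i);       /* in practice this call disappears entirely */

    return total;
}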

Now, if you wanted to be clever about this argument, you'd bring up something like loop unrolling, which is entirely up to the optimizer and can still have significant impact. But even that's fairly easy to predict if you understand what the optimizer is optimizing.

So I will reiterate. If your profiling code upsets optimization, you are doing it wrong. You need to go back and learn low-level optimization. Or at least, let someone who knows what they are doing write the profiling code.


And what do you think is inside these complex things? Anything complex in a computer is just a complex arrangement of simple instructions. But no matter. If all you have is a few complex algorithms, it's even more important to profile to know which one is the bottleneck. Then you dive inside and you profile the simple things it's built out of to see why.

As I said, there are simple combinations of complex things inside those complex things. At any level above the innermost loop bodies, there are typically 1-3 sequentially executed tasks that take a nontrivial amount of time, and those tasks are almost always obvious to anyone. At any level, you can choose any of the nontrivial tasks to optimize, and you see a noticeable improvement in performance. Optimize all of them, and you see a significant improvement.

That's how things work in scientific computing, and in data processing in general. The structure of the code tends to be simple, and the bottlenecks are usually obvious.

For all practical purposes, a static __inline will always get inlined. Has to do with scope. You should never, ever rely on optimizer to make these sorts of decisions for you. If it should be inlined, inline it. If it shouldn't, don't. Pretty simple.

Many compilers treat inline as a hint, which they can choose to ignore. Because scientific software is almost always distributed as source code, you can't rely on compiler-specific behavior.


Running into a bit of a snag in C++.

Apparently, I am finding that generic C string processing does not exist. Instead, string processing functions are left to various included libraries. In fact, I learned that the char type is good for little more than holding data. Manipulating char apparently throws a warning message. I think I have all the tools I need for capturing strings, but the problem is that character processing seems to be very inefficient; I will have to do 2 to 4 type conversions to get what I want.

Does anyone know of a GNU library for C that has a broad spectrum of char comparison, manipulation or conversion functions? Right now, because of my IDE, I'd like to have something that works well in C# and C++.


Running into a bit of a snag in C++.

Apparently, I am finding that generic C string processing does not exist. Instead, string processing functions are left to various included libraries. In fact, I learned that the char type is good for little more than holding data. Manipulating char apparently throws a warning message. I think I have all the tools I need for capturing strings, but the problem is that character processing seems to be very inefficient; I will have to do 2 to 4 type conversions to get what I want.

Does anyone know of a GNU library for C that has a broad spectrum of char comparison, manipulation or conversion functions? Right now, because of my IDE, I'd like to have something that works well in C# and C++.

Why? A string in C is literally an array of characters. string.h has a bunch of functions for manipulating strings. And if you need to do conversion to a number or what have you, there is the sscanf() function in stdio.h.

You are just spoiled by managed types. These do not exist in C/C++ because they are slow. Want to have fast code? Learn to manage your own data.
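A small sketch of that, with an invented input line, showing sscanf pulling typed values out of a plain char array and strcmp comparing strings:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[] = "chr7 117559593";   /* made-up input record */
    char name[16];
    long pos;

    /* sscanf parses typed values straight out of a C string */
    if (sscanf(line, "%15s %ld", name, &pos) == 2)
        printf("name=%s pos=%ld\n", name, pos);

    /* strcmp returns 0 when the two strings are equal */
    if (strcmp(name, "chr7") == 0)
        printf("match\n");

    return 0;
}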


Why? A string in C is literally an array of characters. string.h has a bunch of functions for manipulating strings. And if you need to do conversion to number or what have you, there is sscanf() function in stdio.h.

You are just spoiled by managed types. These do not exist in C/C++ because they are slow. Want to have fast code? Learn to manage your own data.

You like to shoot first and ask questions later. sscanf() may be useful in the future, but not right now. The base issue was that the comparisons weren't working; the compiler was rejecting every data type that I could try for the char type comparison. I thought the 0x form would work, it failed, so I tried "A", which failed. Not sure why, but only an immediate decimal is working at the moment. Also tried byte to char. That's why I asked about comparison and conversion. I wasn't trying to VB your beloved C, simply trying to do a conversion that anyone who knows how to do assembly language can do.


Because 'A' isn't a string. It's a character. "A" is a string, which contains two characters: 'A' and '\0'. The trailing '\0', which happens to be ASCII 0, is there to indicate that the string has ended.

Here's a demo for you that might help.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char* pcPassword = "MySecretPassword"; // This is the correct password. It's a pointer to first character.
    char pcBuffer[128];                    // This is an array of characters where attempt will be stored.

    while(1)
    {
        printf("Enter password: ");
        scanf("%s", pcBuffer); // Read user input into pcBuffer.

        if(strcmp(pcPassword, pcBuffer)) // If strings are DIFFERENT...
        {
            printf("Incorrect! Try again.\n");
        }
        else // If they are the same...
        {
            printf("Correct!\n");
            break; // Leave the loop.
        }
    }

    return 0;
}

Note that you should never, ever test passwords like that, and not just because one could buffer-overflow this code. But I don't think you're worried about security in your code. This approach is fine if you are just parsing a file, or something. And you can use sscanf and fscanf in exactly the same way.

There are also several variations of the strcmp function, including ones that ignore case differences and ones that only compare the first n characters. Look up the docs. Keep in mind that strcmp will return 0 (false) if the strings are the same. It's counter-intuitive, but there are some reasons for it that probably aren't relevant to what you are doing.

P.S. I'm using Hungarian Notation here. The "pc" on variables stands for "Pointer to Char". Technically, char[] and char* aren't the same thing, but in this case, they serve the same purpose, so both are labeled with "pc".
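A tiny sketch of that last distinction (the buffer name and contents are just for illustration):

#include <stdio.h>

int main(void)
{
    char acBuffer[] = "hello";   /* an array: 6 bytes of storage, the name is not reassignable */
    char *pcText = acBuffer;     /* a pointer: holds an address and can be re-pointed */

    printf("sizeof(acBuffer) = %zu\n", sizeof(acBuffer));  /* 6 */
    printf("sizeof(pcText)   = %zu\n", sizeof(pcText));    /* pointer size, e.g. 8 on x86-64 */

    pcText = "elsewhere";          /* fine: the pointer now points at a string literal */
    /* acBuffer = "elsewhere"; */  /* would not compile: an array is not assignable */

    return 0;
}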

Edited by K^2
