
C, C++, C# Programming - what is the sense in this


PB666


There is no single "best scenario, best language or best algorithm"; it depends on what you are doing:

- if you need to repeat execution of your app multiple times with the same data, but with different parameters in your logic

- repeat execution of the app every time with a different data set and the same parameters in the logic

- or else :)

I was reading this and noted your comparison. It does seem like C# will not be a bad launching point, but I should note this: https://www.jwz.org/doc/java.html

What does he mean by "Java doesn't have free()"?

- - - Updated - - -

PB666: Declare your AddNumbers method static. Then you can call it without creating an object for it, like so:

Class_AddNumbers.AddNumbers(...);

The closest analogue to your methods/functions in VB that you just write and call in a single module would probably just be private static functions in C#. Callable without an object, but only from other functions in the same class. Same in C++, only there you can also just have static functions in the .cpp file without declaring them in the class header. Note: 'static' means about seven different things in C++ depending on context. And even though the two 'static's here work towards the same goal, they're totally different.
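For illustration, here is a minimal C++ sketch of the two kinds of 'static' described above (the class and function names are made up for the example):

#include <stdio.h>

// Static member function: callable without creating an object,
// e.g. MathHelpers::AddNumbers(2, 3) - the C++ analogue of the C# advice above.
class MathHelpers
{
public:
    static int AddNumbers(int a, int b) { return a + b; }
};

// File-scope static function: internal linkage, visible only inside
// this .cpp file; no class or object required at all.
static int AddNumbersLocal(int a, int b)
{
    return a + b;
}

int main()
{
    printf("%d %d\n", MathHelpers::AddNumbers(2, 3), AddNumbersLocal(4, 5));
    return 0;
}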

I know, I know, but I was just trying to get a class function to work, so I reduced it to the simplest possible case. It was a bit of a cheat for VB, but VB got a cheat anyway because the function can be defined in the same module. However, I should add that I tend to create a separate Functions module to keep Subs and Functions separate from the Main routine, so it's not really a cheat. The example was just trying to demonstrate the hassle of creating an object in C#.

- - - Updated - - -

If a plugin has to be in C++, just roll with it. There are some nuances, but for your purposes, you can write C code in C++ and it will compile, execute, and probably run either just as fast or close enough. Plus, you can use some of the neat C++ features, like the new operator instead of malloc(). On the other hand, you don't have to. So if you don't want to declare a single class, don't.

Oh, that could be a plugin to the dev environment.

I think it is a dev thing. So use everything in C++, but don't use classes for the intense processing stuff; use structs instead, which is more the way I program in VB anyway (using user-defined types).


Which contradicts your previous point. Take, say, calculating a log or a square root: all I need for certain comparisons is 3 digits. Many of these are statistical, so if I am off by 0.001 in P at the threshold of P = 0.05 it doesn't really matter, since I will have to reprocess all the positives anyway. For a square root I could take Newton's shortcut and it would work.

My point was that the hardware does almost exactly what you are complaining about (determine double the precision, then throw it away), only it makes the CPU more expensive (although likely just in the noise nowadays). Square roots and natural logs are usually irrational, so an "exact" result is impossible; sqrt is required by IEEE 754 to be correctly rounded, while functions like ln are typically allowed roughly a bit or two of error.

If you really care about the amount of cycles: a square-root calculation takes at most ~20 cycles to run on Haswell. How fast can you write and debug a routine that gets 3 digits of accuracy? And then just how many times will you have to run that routine to justify writing it? (Roughly 500 million calls for every second you spend writing and debugging it, assuming you can get it under 10 cycles.)


My point was that the hardware does almost exactly what you are complaining about (determine double the precision, then throw it away), only it makes the CPU more expensive (although likely just in the noise nowadays). Square roots and natural logs are usually irrational, so an "exact" result is impossible; sqrt is required by IEEE 754 to be correctly rounded, while functions like ln are typically allowed roughly a bit or two of error.

If you really care about the amount of cycles: a square-root calculation takes at most ~20 cycles to run on Haswell. How fast can you write and debug a routine that gets 3 digits of accuracy? And then just how many times will you have to run that routine to justify writing it? (Roughly 500 million calls for every second you spend writing and debugging it, assuming you can get it under 10 cycles.)

I don't know; that's a good experiment.

Let's say I am completely clueless, but I have a spirit that knows everything; the spirit knows that sqrt(33333) = 182.57. OK, I take the number and round it up or down to the nearest square of a multiple of 10; in this case that is 40000, so its square root is 200.

33333/200 = 166.65. So I take the average of the guess and the quotient: 366.65/2 = 183.33. Close enough, but one more round: 33333/183.33 = 181.82. Average = 182.57.

Done. For being completely clueless I got to 5 significant figures pretty quickly. BTW, the derivative of x^2 is 2x, so for any deviation I can quickly use calculus to get much closer in the next step.
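For what it's worth, that guess-divide-average procedure is the Babylonian (Heron's) method, which is Newton's method applied to x^2 - N = 0. A minimal C++ sketch of it, with my own names and an arbitrary tolerance:

#include <stdio.h>
#include <math.h>

// Babylonian / Newton iteration: repeatedly average the guess with n/guess.
// Stops once two successive estimates agree to within tol.
static double sqrt_babylonian(double n, double guess, double tol)
{
    double next = 0.5 * (guess + n / guess);
    while (fabs(next - guess) > tol)
    {
        guess = next;
        next = 0.5 * (guess + n / guess);
    }
    return next;
}

int main(void)
{
    // Start from 200, the known square root of 40000, as in the example above.
    double approx = sqrt_babylonian(33333.0, 200.0, 0.005);
    printf("approx = %.2f, library sqrt = %.2f\n", approx, sqrt(33333.0));
    return 0;
}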

BTW, want to see how many steps it takes to calculate pi to the precision limit of a computer, starting with the chord of 180 degrees? The chord is the measure the ancients used instead of the sine. 2, 2.828, 3, ... about 16 cycles; 6 will suffice.

Hint: the chord of an angle is precisely equal to the sine of its half-angle (on a circle of unit diameter). cos of the half angle = sqrt(1 - sin^2); chord of the half angle = sqrt((1 - cos)^2 + sin^2), where sin and cosine are of the half angle. Multiply that by the number of times the half angle divides the original angle, and after a few cycles you have pi(). https://en.m.wikipedia.org/wiki/Chord_(geometry)

So how would I know when to stop? If I take the answer at any step and round it to 4 places, and the rounded number of the next step equals that of the previous step, then no more cycling is required.
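Here is a rough C++ sketch of that chord-halving scheme (essentially Archimedes' inscribed-polygon method), including the "stop when the rounded value stops changing" test. This is my reading of the recipe, not code from the thread; on a unit circle chord(theta) = 2*sin(theta/2), so each halving doubles the number of sides:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double c = 2.0;    // chord of 180 degrees on a unit circle
    double n = 2.0;    // number of chords around the full circle
    double prev = 0.0;

    for (int step = 1; step <= 30; ++step)
    {
        double pi_est = n * c / 2.0;   // half the inscribed polygon's perimeter
        printf("step %2d: n = %6.0f, pi ~ %.10f\n", step, n, pi_est);

        // stop when the estimate rounded to 4 decimal places stops changing
        if (floor(pi_est * 1e4 + 0.5) == floor(prev * 1e4 + 0.5))
            break;
        prev = pi_est;

        // chord of the half angle; written as c / sqrt(2 + sqrt(4 - c*c))
        // instead of sqrt(2 - sqrt(4 - c*c)) to avoid cancellation error
        c = c / sqrt(2.0 + sqrt(4.0 - c * c));
        n *= 2.0;
    }
    return 0;
}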

Essentially, if an ancient way of calculating can beat advanced mathematics to the punch (I don't need to build the acropolis to more than, say, the precision of a gnat's balls on the scale of the universe), then . . . .


C originated at Bell Labs. It was called C because it was based on the language "B". C++ is based on C. It was originally going to be called "C with Classes" (so you can do stuff like potato.explode();), but they went with C++ as a reference to the increment operator [variable]++ (e.g. n++;). I don't know how C# came into existence. I think Microsoft was involved.

Source: several coding books I have read.

Hope this helps.

Luke

--updated--

Oh god. I still have nightmares about my Arduino (the old ones, with only 16 KB of code space).

Indeed it did... because their engineers were not able enough to work in straight assembler. I used to work with a guy who basically was there when all that went down.


Let's say I am completely clueless, but I have a spirit that knows everything; the spirit knows that sqrt(33333) = 182.57. OK, I take the number and round it up or down to the nearest square of a multiple of 10; in this case that is 40000, so its square root is 200.

33333/200 = 166.65. So I take the average of the guess and the quotient: 366.65/2 = 183.33. Close enough, but one more round: 33333/183.33 = 181.82. Average = 182.57.

Done. For being completely clueless I got to 5 significant figures pretty quickly. BTW, the derivative of x^2 is 2x, so for any deviation I can quickly use calculus to get much closer in the next step.

Yeah, except the CPU does all of that in its internal hardware using an SSE instruction. So it gives you full double floating-point precision in about half the time, because instead of using the general computation instructions that you would need, it literally uses a specialized circuit that only does square roots. You will not be able to beat that.


Yeah, except the CPU does all of that in its internal hardware using an SSE instruction. So it gives you full double floating-point precision in about half the time, because instead of using the general computation instructions that you would need, it literally uses a specialized circuit that only does square roots. You will not be able to beat that.

SSE, jogging the ole memory cells... I found this: http://tommesani.com/index.php/component/content/article/2-simd/46-sse-arithmetic.html

So according to this it can do a single 32-bit operation, or four 32-bit operations simultaneously.

http://assemblyrequired.crashworks.org/timing-square-root/

So I was correct, at least partially: don't trust the MSVC or GCC compilers; according to that page they default to the x87 math coprocessor instead of SSE. If you want to optimize this you need a C compiler with a preference for SSE. It's amazing: SSE came out almost two decades ago and the software is still defaulting to the previous generation of math coprocessor.

You can beat the hardware: the most surprising thing about these results for me was that it is faster to take a reciprocal square root and multiply it than it is to use the native sqrt opcode, by an order of magnitude.

And while it may be the case that he can do it, will I be able to convince some implementation of C on Linux to do it?

In this case the fastest was the SSE rsqrtss(x) * x, so does C give access to the reciprocal square root function on the SSE unit for any implementation? The same can be asked of Java.

Here is the C++ version: http://www.programmingforums.org/thread29168.html

The problem with the instruction is that it is unpipelined and takes several clock cycles; as a consequence, everything stalls. Processing normally keeps the pipeline open.

Amazingly, we have made it back to the science problem of the OP. But at least we have gotten two steps closer to a resolution. Man, I miss the days when I could tell the processor what to do; there would be so little guesswork here.


I was reading this and noted your comparison. It does seem like C# will not be a bad launching point, but I should note this: https://www.jwz.org/doc/java.html

What does he mean by "Java doesn't have free()"?

This guy is a C dev from 2000 ;)

At that time the Java garbage collector was bad, and doing your own memory management was the better solution in some cases. Of course Java doesn't have a function called free(), but you can ask the JVM to run the garbage collector when you need it - though you should be an advanced dev if you want to do this.

If you are someone who likes authorities... keep in mind that Google uses Java and the JVM as their main development environment, and they are not people who like to waste resources ;)


Indeed it did... because their engineers were not able enough to work in straight assembler. I used to work with a guy who basically was there when all that went down.

One urban legend says that the future authors of C (Kenneth Thompson and Dennis Ritchie) liked a computer game similar to Asteroids. They used to play it on the main server, but it wasn't powerful enough to serve about a hundred users and play the game at the same time. Thompson and Ritchie were annoyed with that and thus decided to port the game onto a free PDP-7 that stood in their office. This computer, however, had no operating system, and this forced them to write one (UNIX). In the end they decided to also port the game to a PDP-11, which was a very difficult task because its code was written purely in assembly language. Thus they thought to use some kind of higher-level portable language in order to port the OS code from one computer to another. The B language lacked the functionality that would allow them to exploit some features of the PDP-11, and thus they decided to make C.

Edited by cicatrix

So I was correct, at least partially: don't trust the MSVC or GCC compilers; according to that page they default to the x87 math coprocessor instead of SSE. If you want to optimize this you need a C compiler with a preference for SSE. It's amazing: SSE came out almost two decades ago and the software is still defaulting to the previous generation of math coprocessor.

Not exactly. VS defaults to using SSE instructions as far back as 2010, at least. In fact, optimized code will replace sqrt() function calls with the appropriate intrinsic. You need to set a switch if you want to use AVX features, though. GNU compiler requires a switch to compile SSE instructions, but again, it's quite capable. You don't need a fancy compiler for this.

Also, there are SSE instructions for both 32 and 64 bit floating point. However, with 64 bit, you can only do two numbers at a time, unless you are using AVX. With AVX, you can do full 4-dimensional vector math with double precision. All current CPUs support these features, and we do actually make use of these in game dev whenever possible.
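As a rough illustration of the "four doubles at a time" point, here is a minimal sketch using the AVX intrinsic _mm256_sqrt_pd; it needs an AVX-capable CPU and the matching compiler switch (e.g. -mavx on GCC/Clang, /arch:AVX on MSVC), and the input values are just placeholders:

#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    // four double-precision square roots computed by one AVX instruction
    __m256d v = _mm256_set_pd(2.0, 3.0, 4.0, 5.0);
    __m256d r = _mm256_sqrt_pd(v);

    double out[4];
    _mm256_storeu_pd(out, r);

    // _mm256_set_pd takes its arguments highest-lane first,
    // so out[0] holds sqrt(5.0) and out[3] holds sqrt(2.0)
    printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
    return 0;
}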

In this case the fastest was the SSE rsqrtss(x) * x, so does C give access to the reciprocal square root function on the SSE unit for any implementation? The same can be asked of Java.

Any modern CPU will support SSE and SSE2 instructions. I don't recall which of the two rsqrtss belongs to, but it's one of these. This is completely independent of the operating system. Furthermore, intrinsics are implemented in a similar way on both GNU and Microsoft compilers. Same code should compile on both.

There is something to keep in mind about the rsqrt, however. It has MUCH lower precision than sqrtss or sqrt(). It's something like 11 binary places, I think. So the precision is a little better than 3 decimal places. If you're happy with that, by all means run with it. But that's the reason why it's faster than sqrtss.

The code for using this flavor would look something like this.

#include <stdio.h>
#include <emmintrin.h>

/* Approximate sqrt(x) as rsqrt(x) * x using the SSE reciprocal
   square root estimate; good to roughly 3 decimal digits. */
static __inline float sqrt_fast(float x)
{
    float ret;
    __m128 y = _mm_set1_ps(x);           /* broadcast x into all four lanes */

    y = _mm_mul_ps(_mm_rsqrt_ps(y), y);  /* rsqrt(x) * x ~= sqrt(x) */
    _mm_store_ss(&ret, y);               /* copy the low lane back out */

    return ret;
}

int main(void)
{
    float x = 2.0f;

    printf("sqrt(%.2f) = %.3f\n", x, sqrt_fast(x));

    return 0;
}

Yes, there's a good reason to use _ps instructions instead of _ss instructions here, although the speed gain is tiny.
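If the rsqrt estimate's precision isn't enough but sqrtss still feels too slow, a common trick (my addition, not something benchmarked in this post) is to follow the estimate with one Newton-Raphson step, which roughly doubles the number of correct bits at the cost of a few extra multiplies:

#include <stdio.h>
#include <emmintrin.h>

/* rsqrt estimate refined by one Newton-Raphson iteration:
   y' = y * (1.5 - 0.5 * x * y * y), then sqrt(x) ~= x * y' */
static __inline float sqrt_fast_refined(float x)
{
    __m128 vx   = _mm_set1_ps(x);
    __m128 y    = _mm_rsqrt_ps(vx);

    __m128 half = _mm_set1_ps(0.5f);
    __m128 three_halves = _mm_set1_ps(1.5f);
    __m128 y2   = _mm_mul_ps(y, y);
    y = _mm_mul_ps(y, _mm_sub_ps(three_halves,
                                 _mm_mul_ps(half, _mm_mul_ps(vx, y2))));

    float ret;
    _mm_store_ss(&ret, _mm_mul_ps(vx, y));  /* sqrt(x) = x * (1/sqrt(x)) */
    return ret;
}

int main(void)
{
    float x = 2.0f;
    printf("sqrt(%.2f) = %.7f\n", x, sqrt_fast_refined(x));
    return 0;
}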

P.S. The method above runs through an array at an average rate of 7.5 cycles per sqrt. Using _mm_sqrt_ps() does the same work in about 15 cycles per sqrt. So using reciprocal with multiplication is actually twice as fast. Using a function call from math.h is 50 cycles per sqrt. All times are averaged from 1,000 operations, compiled on VC++ with /O2 flag set.

P.P.S. Same deal with GNU compiler, using -O2 -msse2 switches. The fast sqrt computed in less than 5 cycles per operation. The sqrtps one still came in at 15, and math.h one managed 35.
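For anyone who wants to reproduce that kind of comparison, a rough harness along these lines should do. This is my own sketch, not the code used for the numbers above; it reports nanoseconds per call rather than cycles, and the array size and repeat count are arbitrary:

#include <stdio.h>
#include <math.h>
#include <emmintrin.h>
#include <chrono>

static __inline float sqrt_fast(float x)
{
    float ret;
    __m128 y = _mm_set1_ps(x);
    y = _mm_mul_ps(_mm_rsqrt_ps(y), y);
    _mm_store_ss(&ret, y);
    return ret;
}

int main(void)
{
    const int N = 1000, REPS = 10000;
    static float in[N];
    for (int i = 0; i < N; ++i)
        in[i] = 1.0f + (float)i;        /* arbitrary positive inputs */

    double sum_fast = 0.0, sum_libm = 0.0;

    auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < REPS; ++r)
        for (int i = 0; i < N; ++i)
            sum_fast += sqrt_fast(in[i]);
    auto t1 = std::chrono::steady_clock::now();
    for (int r = 0; r < REPS; ++r)
        for (int i = 0; i < N; ++i)
            sum_libm += sqrtf(in[i]);
    auto t2 = std::chrono::steady_clock::now();

    double ns_fast = std::chrono::duration<double, std::nano>(t1 - t0).count() / ((double)N * REPS);
    double ns_libm = std::chrono::duration<double, std::nano>(t2 - t1).count() / ((double)N * REPS);

    /* print the sums so the compiler cannot throw the loops away */
    printf("rsqrt*x: %.3f ns/op, sqrtf: %.3f ns/op (sums %.1f %.1f)\n",
           ns_fast, ns_libm, sum_fast, sum_libm);
    return 0;
}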

Edited by K^2

...The reason why I selected C hash...

Sry 4 wisecracking:

It's called "C sharp", like the note.

Fun fact: in musical notation a tone can be lowered or raised a half-tone by using "b" or "#". "C#" is halfway between "C" and "D" and synonymous with "Db".

(not going into the physics and conventions of tempered tunings here...)

just clarifying where the name comes from, carry on...


I don't know if it's been mentioned due to skimming but..

C always ends up being the lowest common interface, which is why, when the new SHA-3 was announced for example, the outcome of that was a reference C library. The applications I write in Ruby and the applications I write in Erlang (and you all really should write more Erlang) can all use that C library, and if Java happens to float your boat, you can use that library too. It just so happens that performance is a major issue for a hashing library, but even if it weren't, you still wouldn't find this sort of thing written in C#, because Linux users aren't going to install Mono just to get at some way of accessing the code from their favourite platform.

So it doesn't always come down to performance, sometimes it's down to being the right portability solution.


Not exactly. VS defaults to using SSE instructions as far back as 2010, at least. In fact, optimized code will replace sqrt() function calls with the appropriate intrinsic. You need to set a switch if you want to use AVX features, though. GNU compiler requires a switch to compile SSE instructions, but again, it's quite capable. You don't need a fancy compiler for this.

Hmmm, I think the download mentioned the freeware was 2012 or 2013; I will have to check. This entire post will get printed out . . . That's the problem with googling, but on the bright side the socket 775 went down in favor of an i5, so at least the CPU has the latest instruction sets.

Also, there are SSE instructions for both 32 and 64 bit floating point. However, with 64 bit, you can only do two numbers at a time, unless you are using AVX. With AVX, you can do full 4-dimensional vector math with double precision. All current CPUs support these features, and we do actually make use of these in game dev whenever possible.

When you say the difference is tiny, are we talking relative to all game processes or relative to a particular process, like a perspective change? Anyway, I have to see what the full features of the CPU are and what math functions it supports.

After I did a little research I found that this is Microsoft C++ implementation code: https://msdn.microsoft.com/en-us/library/ayeb3ayc.aspx. Going to upset some folks, but it cannot be run on ARM machines, so the code would not be portable anyway.

So . . . after a long, contentious debate we have gravitated to C++, sparing on the class definitions and focusing on the struct definitions. And so finally we find the sense of C is in C++.

- - - Updated - - -

Sry 4 wisecracking:

It's called "C sharp", like the note.

Fun fact: in musical notation a tone can be lowered or raised a half-tone by using "b" or "#". "C#" is halfway between "C" and "D" and synonymous with "Db".

(not going into the physics and conventions of tempered tunings here...)

just clarifying where the name comes from, carry on...

Hash is a colloquial deprecation; after you try to implement a class you will know why.

- - - Updated - - -

I don't know if it's been mentioned due to skimming but..

C always ends up being the lowest common interface, which is why, when the new SHA-3 was announced for example, the outcome of that was a reference C library. The applications I write in Ruby and the applications I write in Erlang (and you all really should write more Erlang) can all use that C library, and if Java happens to float your boat, you can use that library too. It just so happens that performance is a major issue for a hashing library, but even if it weren't, you still wouldn't find this sort of thing written in C#, because Linux users aren't going to install Mono just to get at some way of accessing the code from their favourite platform.

So it doesn't always come down to performance, sometimes it's down to being the right portability solution.

Mono was one of the recommended downloads for Ubuntu, and the MonoDevelop IDE is one of the two IDEs I have installed; I wonder how many other folks have MonoDevelop on their Linux.

Edited by PB666

One urban legend says that the future authors of C (Kenneth Thompson and Dennis Ritchie) liked a computer game similar to Asteroids. They used to play it on the main server, but it wasn't powerful enough to serve about a hundred users and play the game at the same time. Thompson and Ritchie were annoyed with that and thus decided to port the game onto a free PDP-7 that stood in their office. This computer, however, had no operating system, and this forced them to write one (UNIX). In the end they decided to also port the game to a PDP-11, which was a very difficult task because its code was written purely in assembly language. Thus they thought to use some kind of higher-level portable language in order to port the OS code from one computer to another. The B language lacked the functionality that would allow them to exploit some features of the PDP-11, and thus they decided to make C.

Yes, I've seen/heard this before. The C language and the Unix OS were developed by and for AT&T Bell Labs; they belonged to them - their employees Thompson and Ritchie were the creators. PDPs were widely used by AT&T and Bell; their switching-system interfaces relied heavily on them... I started my career working for Bell Telephone as a 1AESS RCMAC programmer. This is AT&T Bell Labs we're talking about here... do you seriously think this all came about because of a game? I'd need to see personal memoirs by Thompson or Ritchie claiming so before I believed that.


I'd need to see personal memoirs by Thompson or Ritchie claiming so before I believed that.

That's why I said it was an urban legend. Here's another one. When I first read it, without any reference to April 1st, I actually took it for the real thing.

T h e   V O G O N   N e w s   S e r v i c e

VNS TECHNOLOGY WATCH: [Mike Taylor, VNS Correspondent]
===================== [Littleton, MA, USA]

COMPUTERWORLD 1 April

CREATORS ADMIT Unix, C HOAX

In an announcement that has stunned the computer industry, Ken Thompson, Dennis Ritchie and Brian Kernighan admitted that the Unix operating system and C programming language created by them is an elaborate April Fools prank kept alive for over 20 years. Speaking at the recent UnixWorld Software Development Forum, Thompson revealed the following:

"In 1969, AT&T had just terminated their work with the GE/Honeywell/AT&T Multics project. Brian and I had just started working with an early release of Pascal from Professor Nichlaus Wirth's ETH labs in Switzerland and we were impressed with its elegant simplicity and power. Dennis had just finished reading 'Bored of the Rings', a hilarious National Lampoon parody of the great Tolkien 'Lord of the Rings' trilogy. As a lark, we decided to do parodies of the Multics environment and Pascal. Dennis and I were responsible for the operating environment. We looked at Multics and designed the new system to be as complex and cryptic as possible to maximize casual users' frustration levels, calling it Unix as a parody of Multics, as well as other more risque allusions. Then Dennis and Brian worked on a truly warped version of Pascal, called 'A'. When we found others were actually trying to create real programs with A, we quickly added additional cryptic features and evolved into B, BCPL and finally C. We stopped when we got a clean compile on the following syntax:

for(;P("\n"),R--;P("|"))for(e=C;e--;P("_"+(*u++/8)%2))P("| "+(*u/4)%2);

To think that modern programmers would try to use a language that allowed such a statement was beyond our comprehension! We actually thought of selling this to the Soviets to set their computer science progress back 20 or more years. Imagine our surprise when AT&T and other US corporations actually began trying to use Unix and C! It has taken them 20 years to develop enough expertise to generate even marginally useful applications using this 1960's technological parody, but we are impressed with the tenacity (if not common sense) of the general Unix and C programmer. In any event, Brian, Dennis and I have been working exclusively in Pascal on the Apple Macintosh for the past few years and feel really guilty about the chaos, confusion and truly bad programming that have resulted from our silly prank so long ago."

Major Unix and C vendors and customers, including AT&T, Microsoft, Hewlett-Packard, GTE, NCR, and DEC have refused comment at this time. Borland International, a leading vendor of Pascal and C tools, including the popular Turbo Pascal, Turbo C and Turbo C++, stated they had suspected this for a number of years and would continue to enhance their Pascal products and halt further efforts to develop C. An IBM spokesman broke into uncontrolled laughter and had to postpone a hastily convened news conference concerning the fate of the RS-6000, merely stating 'VM will be available Real Soon Now'. In a cryptic statement, Professor Wirth of the ETH institute and father of the Pascal, Modula 2 and Oberon structured languages, merely stated that P. T. Barnum was correct.

In a related late-breaking story, usually reliable sources are stating that a similar confession may be forthcoming from William Gates concerning the MS-DOS and Windows operating environments. And IBM spokesmen have begun denying that the Virtual Machine (VM) product is an internal prank gone awry.

{COMPUTERWORLD 1 April}
{contributed by Bernard L. Hayes}

The following is a joke for when someone points out the advantages of Pascal over C:

There are 10 known advantages of Pascal over C. I will state only one, but the most important one:

For example, you can write this in C:

for(;P("\n"),R--;P("|"))for(e=C;e--;P("_"+(*u++/8)%2))P("| "+(*u/4)%2);

In Pascal, you CANNOT.

Edited by cicatrix

Funny, but you can't write that in BASIC either.

- - - Updated - - -

If this was the only reason, we'd probably be having this discussion about Fortran 77, or something. There are reasons why C became this de facto standard.

Allen and Gates killed Fortran when they invented Altair BASIC and ported it as the BIOS of the PC, in so doing creating MS and essentially giving BASICA and GW-BASIC away with every new PC; in 1987 you could buy VB dirt cheap. Most of the non-MS apps I used before '90 were either uncompiled BASIC routines or standalone routines compiled from BASIC. With VB, Fortran was dead; stick a fork in it. What programs did your average garage-built PC come with, other than MS-DOS? 'Nough said.

Gates basically filtered what people used: WordStar, gone; VisiCalc, gone. If you throw your software to the world with all but zero piracy guards on it, then give away the programming language you used to create its kernel, and basically create a national geek fest for all the teenagers who think they can program better than you, some of them actually will. Which you know from the start will happen, because that is the basis of your own company. So you just scoop up unpatented ideas and reverse-engineer them into your OS, while the hackers end up in corporate jobs using your OS and your software to basically overthrow your competition.

C exists because of the limitations of MS BASIC, plain and simple. After 1987 C was always ahead, and BASIC was always playing catch-up, because it was simply easier to improve C than to improve BASIC, and CodeView and the Macro Assembler gave the programmer a good reason for doing so. If assembly-level programming were practical on the 80286 or later PCs, we would not be talking about this. C provides the minimal amount of up-front structure and the maximum amount of control.

Edited by PB666

Fortran would have had no trouble competing with BASIC, for the same reason that C did: it was a real programming language. If something killed Fortran, it was most definitely C, because C actually punishes you for the bad programming practices that Fortran and BASIC encouraged. Anyone who has ever attempted to port a Fortran library will tell you the same.

On a side note, why in the world would you want to port computationally heavy code to ARM? That is just a silly idea.


Fortran would have had no trouble competing with BASIC, for the same reason that C did: it was a real programming language. If something killed Fortran, it was most definitely C, because C actually punishes you for the bad programming practices that Fortran and BASIC encouraged. Anyone who has ever attempted to port a Fortran library will tell you the same.

On a side note, why in the world would you want to port computationally heavy code to ARM? That is just a silly idea.

BASIC is a real programming language. If MS had chosen to go with Visual Fortran and drop VB, we would not be having this conversation now. BASIC is Gates' baby, Fortran wasn't; it's as simple as that. I learned Fortran IV in college, and there's not a chance I would have dropped BASIC for Fortran once QuickBASIC came along. If you were on a PC and you programmed, you were married to M$, that's it, because if you went with the other guy's language the next OS might make that program unrunnable. She's quite a jealous wife. That's why we are talking about Linux, standalone C/C++ programming and dual-boot systems; divorce is not an easy thing. Whatever happened to Borland?

Fortran has the social legacy of punch cards, readers, and thick stacks of tree-killing paper where a misspelling error occurs in line 656 of your code, although it continues to process instructions. It could not make the transition fast enough to on-the-fly, paperless debugging.

The problem with BASIC is rooted in its early evolution. As the saying goes, you cannot be a slave to two lords: on the one hand it was the BIOS, but on the other hand it was a novelty that could be used by some college student staring at a blank screen who was capable of typing the word GWBASIC[return], and all of a sudden Gates potentially had another outsourced author of his next MS-DOS extension.


F77 produces linkable binaries. It has performance comparable to C. It's a real programming language. BASIC and QBasic are interpreted languages. They run like snails on sedatives. If the C family of languages had not emerged, you can bet your rear that every single operating system right now would be written in F77 or F90. There are no alternatives that can compete in performance with C and F77. The next best thing is Pascal, but it never had the same optimizations that went into C and F77. Everything else is hopelessly outdated for writing anything like a modern OS. The only reason F77 has lost is because of its syntax, which is much closer to BASIC than C, and that's a very bad thing.

Again, you're arguing from pure ignorance, having zero understanding of how the code is executed and what it involves. You are entirely out of your depth. That should be clear even to you. Why do you insist on continuing to argue? I understand if you were asking questions, but you keep pretending that you know better, when it's demonstrably false.


F77 produces linkable binaries. It has performance comparable to C. It's a real programming language. BASIC and QBasic are interpreted languages. They run like snails on sedatives. If the C family of languages had not emerged, you can bet your rear that every single operating system right now would be written in F77 or F90. There are no alternatives that can compete in performance with C and F77. The next best thing is Pascal, but it never had the same optimizations that went into C and F77. Everything else is hopelessly outdated for writing anything like a modern OS. The only reason F77 has lost is because of its syntax, which is much closer to BASIC than C, and that's a very bad thing.

Again, you're arguing from pure ignorance, having zero understanding of how the code is executed and what it involves. You are entirely out of your depth. That should be clear even to you. Why do you insist on continuing to argue? I understand if you were asking questions, but you keep pretending that you know better, when it's demonstrably false.

Here comes the arrogance again - and the moderators wonder why these conflicts appear?

A programming language is a formal constructed language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. - Wikipedia

BASIC and QBasic are interpreted languages.

I am not using BASIC, but another interpreted language: Python.

While my code is interpreted and slower than C, any call to a system or math function is as fast as in C, because I can use CPython - an interpreter with libraries written in C and compiled, so Linux can run them the same way as your C code ;)

Again about Google: Python creator Guido van Rossum once said that Google would use only Python but for a few features, and since Java has a larger market share they want to lure as many devs as possible to their tools.

Facebook is written in PHP, but they convert and compile their code to C++.

You can do the same thing with other interpreted languages.

Of course, for someone who is doing physics, Fortran is always the best.

Also, why do you even compare interpreted and compiled languages? If you compare a monkey and a fish on how fast they can climb trees, the monkey always wins, but it is very narrow-minded, since interpreted languages are not created to have the same performance and purpose as compiled languages.


It's my guess that none of you ever programmed in Fortran for a living (or at least, very few of you). I did (along with several other languages, and 4GLs), starting in the late 1970s on through the early 2000s. Fortran is still a viable language, and I'd be willing to bet it's still used in the labs for hard-core math calculations, especially since there are still extensive scientific libraries for it. Also, you guys are comparing the world of the PC vs. minis, micros, and mainframes... and that's like comparing apples to oranges. :rolleyes:

- - - Updated - - -

And another thing I'll throw in: the difference in overhead between Fortran and any C variant is like night and day... Fortran is far tighter.


I am not using BASIC, but another interpreted language: Python.

While my code is interpreted and slower than C, any call to a system or math function is as fast as in C, because I can use CPython - an interpreter with libraries written in C and compiled, so Linux can run them the same way as your C code ;)

You can code high-level logic in whatever you want - even Prolog. If you need to compute something, you write your algorithm in C and call it from Python. And no, you don't just call math functions from a Python algorithm, because then you lose all of the compiler optimization and your code runs at about 1/10th the speed at best. There are times when you don't care, but then you don't go around complaining that your computations take too long.

And the fact that I'm referring to BASIC as not a real language has very little to do with the fact that it's interpreted. It's garbage. It has bad syntax, it forces you to use bad programming practices, it's abysmal in performance even for an interpreter, and when something goes wrong in BASIC code, figuring out why is next to impossible. The idea that BASIC could have ever beaten out FORTRAN is ludicrous. For starters, you can write an operating system in FORTRAN.

It's my guess that none of you ever programmed in Fortran for a living (or at least, very few of you). I did (along with several other languages, and 4GLs), starting in the late 1970s on through the early 2000s. Fortran is still a viable language, and I'd be willing to bet it's still used in the labs for hard-core math calculations, especially since there are still extensive scientific libraries for it.

Fortran is dying out in labs, and for a very good reason. There are much better libraries for C/C++ these days, and most of the old code written in Fortran, including some of the libraries, is absolutely atrocious - specifically because in F77 it was very difficult to avoid bad programming practices, and while F90 is quite decent, many Fortran programmers carried their bad habits over. To put it plainly, I've seen goto statements in F90 libraries. That's bad. That shouldn't happen.

Like I've said earlier, though, Fortran could easily have been where C is if there were no C. Out of the languages currently out there, barring all the C-like languages, Fortran is the next best choice for building OSes and computation libraries.


It's my guess that none of you ever programmed in Fortran for a living (or at least, very few of you). I did (along with several other languages, and 4GLs), starting in the late 1970s on through the early 2000s. Fortran is still a viable language, and I'd be willing to bet it's still used in the labs for hard-core math calculations, especially since there are still extensive scientific libraries for it. Also, you guys are comparing the world of the PC vs. minis, micros, and mainframes... and that's like comparing apples to oranges. :rolleyes:

Fortran is a very viable language for things like supercomputers, there are extensive apps for it, and very occasionally I see papers whose supplementary material has a Fortran algorithm in it. That's the apples and oranges. There are also X Windows/Unix boxes that are running C-built apps. Yes, the visual stuff is an IDE; unlike K^2 (whose statement about the slop in VB in no way denigrates it as a language, and once a very popular one) I don't see that as a problem or a weakness. You can write C in an IDE, but even in old BASIC forms you can create a standalone exe without needing runtime files, unlike some later versions of Visual Basic. I am not a programming ethnocentrist; as for the question of why Fortran did not take off... I was there, and I saw all the flies swarming around MS, and they cried 'woe are we, everyone is pirating all our software.' But the flies were their lifeblood, because they built interest and flooded everything else out. The problems with BASIC are all true, but BASIC was made for machines with 4K of program memory, lol. One of my sister's friends had a Singer computer with a 12-inch TV, a tape recorder, and a small calculator-like keyboard to enter the program. That is where BASIC comes from. These were novelties. But Microsoft did keep it evolving.

Edited by PB666
