
How does software "work"?



If you want a complete answer, get the book “Code” by Charles Petzold. He answers exactly that: how software works, all the way up from transistors, bits, and bytes.

This book, in theory, gives you all the fundamentals for building your own computer and writing your own compiler, although you’ll find that a bit more challenging in practice. Still, it provides very good insight into how exactly computers work at a fundamental level.

Mr. Petzold is not some journalist who set out to write a popular computer science book, but rather the author of “the book” on Windows programming, and he knows what he’s writing about.


I will second Kerbart's recommendation of Petzold's "Code". In addition, tomf's idea of the chain of abstractions is the central principle in Andrew Tanenbaum's "Structured Computer Organization." Finally, look for Nisan and Schocken's "The Elements of Computing Systems" for a more "hands on" approach.
 


the Turing machine is usually the thing you want to get your head around. it pretty much has four operations: read, write, increment and decrement. thats an example of a 2-bit instruction set. the thing to understand is that any instruction is ultimately just a number that configures the cpu for a particular task, say by activating the alu and enabling an addition operation, or setting an output register for the operation, or loading an address into a memory register for a write. in the case of our Turing machine the first bit would select between moving the tape and read/write; the second bit is for parameters, say which direction to move the "tape" or which memory operation you want to perform. then there is the concept of a register: you need a place to stick the value retrieved from the tape, or to hold the bit you want to write.

thats the machine, but how do you code for it? you want an assembler. all that does is map each named instruction to a number, say INC (increment), DEC (decrement), LD (load), ST (store). now you really cant do a whole lot with four instructions, though what you can do would amaze you. for example, there is no way to operate on the data in the register. so you add bits to the instruction, and each bit adds more machinery. say a bit that says 'i want to operate on the register, not the tape'; this one bit essentially doubles your instruction set, and it can also change the meaning of the other bits. so now we can write a 1 or a 0 into the register; lets call these STR (set register) and CLR (clear register). reading the register doesnt make much sense yet because there is nothing to do with the value, but you can push its data onto the tape. you can also CMP (compare) the tape with the register, with the output going back into the register: 1 if the bits match, 0 if they dont (thats a bitwise xnor, which is what makes the NOT trick below work). it would be good to complement that with OR (bitwise or). once you have a few logic operations you can start doing math. but can we use what we have to build the others?

so here is your 3-bit instruction set.

INC  --increment the tape by one cell
DEC  --decrement the tape by one cell
LD   --load the tape value into the register
ST   --store the register value on the tape
STR  --set the register to one
CLR  --set the register to zero
CMP  --compare the register with the tape value (1 if equal, 0 if not -- a bitwise xnor) and write the output to the register
OR   --bitwise or the tape and register values and write the output to the register
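in code, the "assembler" for a table like that is little more than a lookup from mnemonic to opcode number. a sketch in python (the mnemonics come from the list above; the 3-bit encodings are made up for illustration):

```python
# Hypothetical assembler for the 3-bit instruction set above:
# each mnemonic maps to an invented 3-bit opcode number.
OPCODES = {"INC": 0b000, "DEC": 0b001, "LD": 0b010, "ST": 0b011,
           "STR": 0b100, "CLR": 0b101, "CMP": 0b110, "OR": 0b111}

def assemble(source):
    """Turn a newline-separated listing into a list of opcode numbers."""
    return [OPCODES[line.strip()] for line in source.splitlines() if line.strip()]

print(assemble("INC\nCLR\nST"))   # -> [0, 5, 3]
```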

with that you can determine the bitwise not of the value on the tape:

//we want the NOT of the current bit on the tape, but we need some scratch memory for the operation, so advance the tape
INC
//CMP acts as a NOT if we control one of its inputs; we can do that by writing a zero onto the tape
CLR
ST
//now we need to go back to fetch the value we want to operate on and register it
DEC
LD
//go back to the zero we wrote earlier
INC
//now if the value we retrieved is a one CMP will write a zero to the register, if its a zero it will write a one
CMP
//and we can put the output back on the tape for later use
ST
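a quick way to check a routine like that is to simulate the machine. a toy sketch in python (CMP here is the equality/xnor compare the trick relies on, and the routine ends with an ST to store the result):

```python
# Toy simulator for the 8-instruction tape machine described above.
def run(program, tape, head=0, reg=0):
    for op in program:
        if op == "INC":   head += 1
        elif op == "DEC": head -= 1
        elif op == "LD":  reg = tape[head]
        elif op == "ST":  tape[head] = reg
        elif op == "STR": reg = 1
        elif op == "CLR": reg = 0
        elif op == "CMP": reg = 1 if tape[head] == reg else 0  # equality compare
        elif op == "OR":  reg = tape[head] | reg
    return tape, reg

NOT = ["INC", "CLR", "ST", "DEC", "LD", "INC", "CMP", "ST"]
print(run(NOT, [1, 0]))   # -> ([1, 0], 0)  the NOT of 1 lands in cell 1
print(run(NOT, [0, 0]))   # -> ([0, 1], 1)  the NOT of 0 lands in cell 1
```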

that looks a lot like low level code to me, for a very simple computer. if you need a nor gate, de morgan's law gets you there: NOR(a, b) = NOT(a OR b), so its one OR followed by the NOT routine above. anyone who produces code to do that should get a bunch of likes. 

i should also point out that real asm doesnt look like this. more robust instruction sets often allow you to put data into the instruction itself, say with an LDI instruction, but that requires allocating more bits in your instruction set. of course that makes the computer more complex, and likewise the code. also, this theoretical machine doesnt include any machinery to actually load code into the machine, so you would have to key it in manually. you could stick the code on the tape if the machinery had some way to fetch commands from the tape, jump to the end of the program, perform the instruction, and then jump back to the beginning of the program plus the program counter times 3 (instructions are 3 bits, so every increment of the program counter translates to 3 INC commands from the start of the program). you also need to keep track of the offset between the start of the program and the beginning of the "memory area".

to do that you need a few things. you need a way to store the fetched instruction so that it can be performed once the hardware moves the tape back to the start of the memory area. the easiest way is to turn the one-bit register into a 4-bit shift register: every time you write a value to the register, the previous values are shifted down until they "fall off" the end. the instruction set does not have access to any bits other than the first, but the other 3 bits can be used by hardware to store the instruction (call it the instruction register). you could then load an instruction into it by alternating between LD and INC 3 times, followed by another LD to make sure the instruction is aligned right. you need a program counter to index each instruction, so you can always get to the next one by incrementing the pc by one; this would be done in hardware, since counting is easy to do electronically or even mechanically. you also need an offset vector to find where you left off in the memory area relative to the start of the program. initially this would be the number of instructions in the program plus one, and it would go up or down with each INC or DEC called from the instruction register (but not ones generated by hardware for the purpose of fetching instructions). 

your machine might be preloaded with a tape with the code already on it, or the code might be loaded from some other device like a punch card reader or even a keypad. execution would start at the first bit of the program. hardware would first call INC programCounter*3 times (initially this would be zero and nothing would happen), then do LD,INC,LD,INC,LD,INC,LD to feed the instruction into the instruction register. hardware would then call DEC programCounter*3 times (again doing nothing initially) to get back to the start of the program; at this point the program counter would be incremented as well. hardware would also call INC offsetVector times to get to the memory area. the operation stored in the register would then be executed, and if it was an INC or a DEC, the offset vector would be adjusted accordingly. hardware would then call DEC offsetVector times to get back to the start of the code. this loop would continue till the end of the program (when the pc has been incremented program-size times). 

i think that goes far enough down the computer science rabbit hole. just an example of how to make a Turing machine do something useful. programs would be huge just to do something simple like add 2 numbers. thats why modern cpus do as much in hardware as possible. 


5 hours ago, Kerbart said:

If you want a complete answer, get the book “Code” by Charles Petzold. He answers exactly that: how software works, all the way up from transistors, bits, and bytes.

This book, in theory, gives you all the fundamentals for building your own computer and writing your own compiler, although you’ll find that a bit more challenging in practice. Still, it provides very good insight into how exactly computers work at a fundamental level.

Mr. Petzold is not some journalist who set out to write a popular computer science book, but rather the author of “the book” on Windows programming, and he knows what he’s writing about.

A good book, but you'll want to keep this handy for reference while reading it.


6 hours ago, Kerbart said:

If you want a complete answer, get the book “Code” by Charles Petzold. He answers exactly that: how software works, all the way up from transistors, bits, and bytes.

This book, in theory, gives you all the fundamentals for building your own computer and writing your own compiler, although you’ll find that a bit more challenging in practice. Still, it provides very good insight into how exactly computers work at a fundamental level.

Mr. Petzold is not some journalist who set out to write a popular computer science book, but rather the author of “the book” on Windows programming, and he knows what he’s writing about.

I'm going to the library soon to see if they have it.


14 hours ago, Kerbart said:

If you want a complete answer, get the book “Code” by Charles Petzold. He answers exactly that: how software works, all the way up from transistors, bits, and bytes.

This book, in theory, gives you all the fundamentals for building your own computer and writing your own compiler, although you’ll find that a bit more challenging in practice. Still, it provides very good insight into how exactly computers work at a fundamental level.

Mr. Petzold is not some journalist who set out to write a popular computer science book, but rather the author of “the book” on Windows programming, and he knows what he’s writing about.

 

11 hours ago, Kerwood Floyd said:

I will second Kerbart's recommendation of Petzold's "Code". In addition, tomf's idea of the chain of abstractions is the central principle in Andrew Tanenbaum's "Structured Computer Organization." Finally, look for Nisan and Schocken's "The Elements of Computing Systems" for a more "hands on" approach.
 

 

8 hours ago, LordFerret said:

A good book, but you'll want to keep this handy for reference while reading it.

I checked my local library, and in local underfunded library fashion they did not have anything on coding beyond two books on C++ and Java. I'm going to keep looking for these books in closer and better libraries. 

As a point: in a lot of these responses I see comments relating to how the computer acts after it digitizes something, which is useful but not quite what I'm asking. What I'm asking, in summary, is how the computer digitizes an electrical input. 

More fundamentally, I'm asking how these computers can digitize in the first place, as they have no underlying code. I know math governs the universe, but the universe does not know what math is, because it is inanimate.

Edited by Cheif Operations Director

How things are digitized in a modern CPU:

High voltage = 1

Low voltage = 0

 

Transistors have a gate and a channel. Voltage can only go through the channel if the gate is open. Some gates open if a high voltage is applied to them, whereas other gates open if the voltage is low.
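That switching behaviour can be modelled in a few lines. A toy sketch (not real electronics; the function names are invented): an n-type device conducts when its gate is high, a p-type when it is low, and pairing them gives an inverter.

```python
# Toy model of gate-controlled switches (values are 0/1 logic levels).
def nmos_conducts(gate):   # opens on a high gate voltage
    return gate == 1

def pmos_conducts(gate):   # opens on a low gate voltage
    return gate == 0

def inverter(a):
    # p-type pulls the output high, n-type pulls it low; exactly one conducts
    return 1 if pmos_conducts(a) else 0

print(inverter(0), inverter(1))   # -> 1 0
```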

 

What may help explain things in a fun way is Zachtronics video games.

Engineer of the People: Fairly low level transistor-level shenanigans.

TIS-100: The assembly-language coding game nobody ever asked for.

Shenzhen I/O: Cheap Chinese Gizmos: The Video Game.


2 minutes ago, Starman4308 said:

How things are digitized in a modern CPU:

High voltage = 1

Low voltage = 0

 

Transistors have a gate and a channel. Voltage can only go through the channel if the gate is open. Some gates open if a high voltage is applied to them, whereas other gates open if the voltage is low.



 

What may help explain things in a fun way is Zachtronics video games.

Engineer of the People: Fairly low level transistor-level shenanigans.

TIS-100: The assembly-language coding game nobody ever asked for.

Shenzhen I/O: Cheap Chinese Gizmos: The Video Game.

Again, I say this satirically, but why not 2 and 3, or 1 and 0.1? What is the underlying code that lets it become a 0 or 1? It's not LED lights lighting up, because you can not interact with and edit that. 


3 hours ago, Cheif Operations Director said:

Again, I say this satirically, but why not 2 and 3, or 1 and 0.1? What is the underlying code that lets it become a 0 or 1? It's not LED lights lighting up, because you can not interact with and edit that. 

This is a good question, I think it has to do with the way we define conceptual constructs and how that relates to reality. I think it is 1 and 0 because we define it that way, treating a natural phenomenon (electricity) as a pure logical construct because it is close enough. The exact voltages may vary, but we can still say that a semiconductor (transistor) that is in an "on" state is equivalent to 1 (or a logical "true"), in the same way that we can say that a rocker tilted to the right in a marble machine is 1, even though it's exact position and rotation is never the same.

For that matter, even the way our brains consider the concept of "1" is related to the physics of our brains. 


2 minutes ago, Mad Rocket Scientist said:

This is a good question, I think it has to do with the way we define conceptual constructs and how that relates to reality. I think it is 1 and 0 because we define it that way, treating a natural phenomenon (electricity) as a pure logical construct because it is close enough. The exact voltages may vary, but we can still say that a semiconductor (transistor) that is in an "on" state is equivalent to 1 (or a logical "true"), in the same way that we can say that a rocker tilted to the right in a marble machine is 1, even though it's exact position and rotation is never the same.



For that matter, even the way our brains consider the concept of "1" is related to the physics of our brains. 

ok that makes more sense. I suppose my second question on this is how the computer picks 0 or 1 to begin with, with no underlying code. In other words, what is allowing it to process an on or off into binary? Second, how is that binary able to be interacted with?


You really need to start reading up on these topics. Several people have pointed out, in far more detail than necessary, how the basics of these things work.

Quote

no underlying code

The underlying code you're mystified by is the gated transistor logic imprinted on the chip's die.

Please, make the effort...
https://www.khanacademy.org/computing/computer-science/how-computers-work2


Just now, LordFerret said:

You really need to start reading up on these topics.  Several people have pointed out in far more detail than necessary how the basis of these things work.

The underlying code you're mystified by is the gated transistor logic imprinted on the chip's die.

Please, make the effort...
https://www.khanacademy.org/computing/computer-science/how-computers-work2

I'm obtaining those books that were recommended tomorrow, assuming they are in stock. I'll check these links now.


3 hours ago, Cheif Operations Director said:

Again, I say this satirically, but why not 2 and 3, or 1 and 0.1? What is the underlying code that lets it become a 0 or 1? It's not LED lights lighting up, because you can not interact with and edit that. 

Because that's the meaning we've decided they should have. We decided that because it's convenient and useful. It might be more helpful to think of them as on/off, or yes/no, as opposed to 0/1. Computers (transistors) can only answer yes/no questions. How those yes/no questions are arranged in the physical layout, and what time order we ask them in, is how we assign meaning to them. I'll get back to this in a minute.

On 2/20/2019 at 9:15 PM, Cheif Operations Director said:

Is the number

001100, assuming 0 means off and 1 means on, saying

2 offs

2 ons

2 offs?

This is correct. You'll notice that those on/offs are arranged in a certain order. The order allows us to count. We decided what the order means.

How do we count as humans? By powers of ten (decimal). The number 123 = (100 * 1) + (10 * 2) + (1 * 3) = (10^2 * 1) + (10^1 * 2) + (10^0 * 3). This is actually completely arbitrary--We're just used to it. Instead of ten, we could use eight, or sixteen, or three. Computers count by powers of two (binary), because that's what they're good at.

In your example above, we have (2^5 * 0) + (2^4 * 0) + (2^3 * 1) + (2^2 * 1) + (2^1 * 0) + (2^0 * 0) = 0 + 0 + 8 + 4 + 0 + 0 = 12. Your example would be a 6-bit number because it has six places where you can change the values. Perhaps it's only a 6-bit computer, and it can count from 0 (all bits = 0) to 63 (all bits = 1). Modern computers can handle 64-bit numbers. A very basic explanation is that they'll have structures with 64 transistors arranged in a row, with wires coming out going to something else.
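The place-value arithmetic above is easy to verify in any language; a couple of lines of Python, for instance:

```python
# Place-value expansion of the 6-bit pattern 001100 from the post.
bits = [0, 0, 1, 1, 0, 0]
value = sum(b * 2 ** (len(bits) - 1 - i) for i, b in enumerate(bits))
print(value)               # -> 12
print(int("001100", 2))    # -> 12, Python's built-in conversion agrees
print(int("111111", 2))    # -> 63, the largest 6-bit value
```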

Certain structures (arrangements of different types of transistors) in the computer can do math with the outputs of those wires, and the outputs of another 64 wires. That's a bit beyond me to explain well.

You can also decide to say that 000001 means "A", and 000010 means "B", etc.

Back to my earlier statement: We discovered that a lot of different types of problems can be broken down into yes/no questions, not just counting. That can be really challenging but transistors require it. They're incredibly dumb, but incredibly fast.

It's kind of like the game Twenty Questions: somebody thinks of something, and you have to guess what it is by asking a series of yes/no questions (up to 20). At each point you assign meaning to the answer they give, based on the possible answers remaining when you take the prior questions into account. You use that to ask a question that narrows it down further. If you're good at asking yes/no questions, you can figure out what they were thinking of. Software is layers upon layers of this type of game.
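The Twenty Questions idea maps directly onto binary search; a sketch (the function and range are invented for illustration):

```python
# Guessing a number with yes/no questions: each answer halves the range.
def twenty_questions(secret, lo=0, hi=1_000_000):
    asked = 0
    while lo < hi:
        mid = (lo + hi) // 2
        asked += 1
        if secret > mid:   # "is it bigger than mid?" -- answer: yes
            lo = mid + 1
        else:              # answer: no
            hi = mid
    return lo, asked

guess, asked = twenty_questions(271828)
print(guess, asked)   # finds 271828 in at most 20 questions, since 2**20 > 1,000,000
```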

EDIT: @LordFerret You ninja'd me hardcore, buddy :D

Edited by FleshJeb

14 hours ago, Cheif Operations Director said:

Again, I say this satirically, but why not 2 and 3, or 1 and 0.1? What is the underlying code that lets it become a 0 or 1? It's not LED lights lighting up, because you can not interact with and edit that. 

analog computers did exist, many were used in ww2 for fire control systems and things like that. we even use that technology now, the qlc nand in one of my ssds has 16 voltage levels per cell, so you essentially get 4 bits in a single memory cell. memristor memory is one of those new up and comers which is totally analog memory. 

one of the disadvantages of analog computing, where voltages represent numbers, is that its highly susceptible to interference, and those systems often require frequent calibration. there were decimal systems with 10 discrete voltage levels, but it turns out you can simplify the circuitry a lot if you just use 2 discrete levels and convert to decimal in software as needed. this gives you highly accurate calculations which are very repeatable: do the same operation twice and you get the same result. 

the levels used are dictated by the logic family. cmos is the most common in modern devices, mostly due to its very low power consumption. with 5v cmos the transition level is 2.5v, with 1.3v being the maximum low voltage and 3.7v the minimum high. other logic families arent as symmetrical. when the state transitions from low to high there is a period in which the state is indeterminate, since the transition is non-instantaneous, so you usually delay sampling of the outputs until the transition is complete.
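as a function, that thresholding looks something like this (a sketch using the 5v figures above; datasheets call these limits V_IL and V_IH):

```python
# Digitizing a voltage: anything below V_IL reads as 0, anything above
# V_IH as 1, and the band in between is indeterminate.
V_IL, V_IH = 1.3, 3.7   # 5 V CMOS thresholds from the post

def digitize(voltage):
    if voltage <= V_IL:
        return 0
    if voltage >= V_IH:
        return 1
    return None   # mid-transition: wait before sampling

print(digitize(0.4), digitize(4.9), digitize(2.5))   # -> 0 1 None
```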

as for why we call high 1 and low 0: that's totally arbitrary. obviously we need these voltages to represent a binary number system, but you could just as easily call low 1 and high 0, and so long as the whole system makes that assumption it will run no differently than if it were reversed. sometimes its even more confusing: on some serial data buses a transition of level encodes one bit value and the level staying the same encodes the other (usb's nrzi coding works this way, with a transition encoding a 0). you might want to look up information theory and boolean logic. 

Edited by Nuke

24 minutes ago, Nuke said:

analog computers did exist, many were used in ww2 for fire control systems and things like that. we even use that technology now, the qlc nand in one of my ssds has 16 voltage levels per cell, so you essentially get 4 bits in a single memory cell. memristor memory is one of those new up and comers which is totally analog memory. 

one of the disadvantages of analog computing, where voltages represent numbers, is that its highly susceptible to interference, and those systems often require frequent calibration. there were decimal systems with 10 discrete voltage levels, but it turns out you can simplify the circuitry a lot if you just use 2 discrete levels and convert to decimal in software as needed. this gives you highly accurate calculations which are very repeatable: do the same operation twice and you get the same result. 

the levels used are dictated by the logic family. cmos is the most common in modern devices, mostly due to its very low power consumption. with 5v cmos the transition level is 2.5v, with 1.3v being the maximum low voltage and 3.7v the minimum high. other logic families arent as symmetrical. when the state transitions from low to high there is a period in which the state is indeterminate, since the transition is non-instantaneous, so you usually delay sampling of the outputs until the transition is complete.

as for why we call high 1 and low 0: that's totally arbitrary. obviously we need these voltages to represent a binary number system, but you could just as easily call low 1 and high 0, and so long as the whole system makes that assumption it will run no differently than if it were reversed. sometimes its even more confusing: on some serial data buses a transition of level encodes one bit value and the level staying the same encodes the other (usb's nrzi coding works this way, with a transition encoding a 0). you might want to look up information theory and boolean logic. 

An interesting post. I was more trying to get at the point that, from my understanding right now, there needs to be underlying code to make binary be a 0 or 1. My point is: what is the code that lets it be a 0 and 1 instead of a 2 and 3? I said that thing about the LEDs lighting up because that is only a display; you can not interact with it. For example, 001100 could be represented by a red light, 0 = off and 1 = on, so

red off, red off, red on, red on, red off, red off

 

however this can not be interacted with (except for just looking at it), so my point is what underlying code is necessary to make it a 0 or 1 and then proceed to make a program out of it. 


1 hour ago, Cheif Operations Director said:

however this can not be interacted with (except for just looking at it), so my point is what underlying code is necessary to make it a 0 or 1 and then proceed to make a program out of it. 

Abstraction. From hardware through BIOS, device drivers, OS kernel, terminals and GUIs up to application programs. Each level uses functionality of the lower ones. To answer your question in depth, several million lines of code would have to be posted here. Or simply download the Linux sources. It is all there :-)

Watching a light go on and off would be an interaction. But i guess that with "interaction" you mean pushing a mouse over a window so the window's border starts to blink, or clicking somewhere so a program starts to run. Should that be the case, then what was said upthread is true: you are trying to understand too many things at the same time. It is a long way from letting different voltages run through the electronics to programming a graphical user interface, and many levels lie in between.

First, i think, you must just accept that 0 and 1 are the basic information carriers. It was said why this is so (easy to implement, shieldable from the environment, etc.). The first computers (core memory) could not be moved or shaken or the bits would literally flip.

You do not need to know how the currents run through the hardware to program a graphical user interface. A high level language plus an API that provides the necessary functionality is "all" one needs for that. Example: just to open a window on a GUI requires some prep, like making room in memory, setting default values, shuffling data from system to graphics memory, swapping buffer contents, etc. In the end, many thousands of instructions are executed in the background before the window pops up, though the high level representation may just have 100 lines of code all in all (depending on the system, don't kill me, real programmers :-)).

You do not program all this; you call ready-made routines from an API that do the work. Again, that is where abstraction comes into play. The programmer uses a language, includes routines from others, that build on lower level routines, that build on kernel functionality, that calls OS routines, that call firmware in devices, ...
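The layering can be caricatured in a few lines of Python (every name here is invented for the sketch; each function only knows about the layer directly below it):

```python
# Each layer calls only the one below it, like the stack described above.
def hw_set_pixel(fb, x, y):                 # "hardware" layer
    fb[(x, y)] = 1

def drv_hline(fb, y, x0, x1):               # "driver" layer
    for x in range(x0, x1 + 1):
        hw_set_pixel(fb, x, y)

def gui_window_border(fb, x0, y0, x1, y1):  # "toolkit" layer
    drv_hline(fb, y0, x0, x1)               # top edge
    drv_hline(fb, y1, x0, x1)               # bottom edge

fb = {}
gui_window_border(fb, 0, 0, 3, 2)           # one high-level call...
print(len(fb))                              # -> 8 pixel writes happened underneath
```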

If you want to dive into the honorable world of system programming or so, go ahead. But first read the recommended books :-)

 


12 hours ago, FleshJeb said:

How do we count as humans? By powers of ten. This is actually completely arbitrary

Not completely arbitrary. We have ten digits on our hands, so using base ten makes sense. Sure, we could have used base 5 or base 20, but ten is a good balance between information density and brain capacity/memory.

In the world of The Simpsons the logical thing to use would be base eight, which would also be a natural for octopi


37 minutes ago, Green Baron said:

To answer your question in depth, several million lines of code would have to be posted here.

 

That code requires underlying code to work. My point is: what programs that code?

 

 

39 minutes ago, Green Baron said:

Abstraction. From hardware through BIOS, device drivers, OS kernel, terminals and GUIs up to application programs. Each level uses functionality of the lower ones. To answer your question in depth, several million lines of code would have to be posted here. Or simply download the Linux sources. It is all there :-)

Watching a light go on and off would be an interaction. But i guess that with "interaction" you mean pushing a mouse over a window so the window's border starts to blink, or clicking somewhere so a program starts to run. Should that be the case, then what was said upthread is true: you are trying to understand too many things at the same time. It is a long way from letting different voltages run through the electronics to programming a graphical user interface, and many levels lie in between.



First, i think, you must just accept that 0 and 1 are the basic information carriers. It was said why this is so (easy to implement, shieldable from the environment, etc.). The first computers (core memory) could not be moved or shaken or the bits would literally flip.

You do not need to know how the currents run through the hardware to program a graphical user interface. A high level language plus an API that provides the necessary functionality is "all" one needs for that. Example: just to open a window on a GUI requires some prep, like making room in memory, setting default values, shuffling data from system to graphics memory, swapping buffer contents, etc. In the end, many thousands of instructions are executed in the background before the window pops up, though the high level representation may just have 100 lines of code all in all (depending on the system, don't kill me, real programmers :-)).

You do not program all this; you call ready-made routines from an API that do the work. Again, that is where abstraction comes into play. The programmer uses a language, includes routines from others, that build on lower level routines, that build on kernel functionality, that calls OS routines, that call firmware in devices, ...

If you want to dive into the honorable world of system programming or so, go ahead. But first read the recommended books :-)

 

I'm going to look into the books.

I know what you're saying about the code building up in layers, but my point is that it has to start somewhere, specifically in hardware, so how does that work?


Maybe we should step back. Here's a 'computer' that uses marbles, switches and gravity instead of transistors and electricity. But it's the same idea. You set up the mechanical switches on this thing in the same way that you'd set up the transistor switches in an electronic computer.

In neither case does the computer "know" how you "programmed" it. It just sends electricity (marbles) through the transistors (switches) based on how the switches are set up.

 

Edited by 5thHorseman

19 minutes ago, Cheif Operations Director said:

That code requires underlying code to work. My point is what programs that code.

Humans write the code at all levels. But i probably don't understand the question? Setting out on a search like "memory register circuits layout" gives you a plethora of information on the electrical basics.

You can play with it by trying out assembly, if you're on windows with https://nasm.us/, if you're on Linux then everything is on your PC already. There are many tutorials out there. If you want to skip assembly you can try out with C (just one example: https://www.cprogramming.com/tutorial/bitwise_operators.html), shifting bits and bytes, operations in the number systems base 2, 8, 10, 16 until you dream of it, be it sweet dream or nightmare :-)
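The sort of bit-shuffling exercise meant here can be tried in Python just as well; a few warm-ups (the values are arbitrary examples):

```python
# Bit-twiddling warm-ups: shifts, masks, and base conversions.
x = 0b001100                        # 12 in binary
print(x << 1)                       # -> 24  (shift left = multiply by 2)
print(x >> 2)                       # -> 3   (shift right = divide by 4)
print(bool(x & 0b001000))           # -> True (test a single bit with AND)
print(format(x | 0b000001, "06b"))  # -> 001101 (set a bit with OR)
print(hex(x), oct(x))               # -> 0xc 0o14 (the same value in bases 16 and 8)
```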

This shifting in itself does not make for an interactive application, but enough of them combined in a meaningful manner do.

25 minutes ago, StrandedonEarth said:

Not completely arbitrary.

True. We have adopted the Arabic numbering system that emerged in the early middle ages. Other systems exist, Egyptian, Roman, Ionic, .... one can use what one is accustomed to. Just look at the British guys, ... *duckandcover* :-)

Edited by Green Baron

Looking at @5thHorseman's computer above, the designer of the computer would decide on a convention of Red = 1 and Blue = 0, or Blue = 1 and Red = 0, and everyone who used the computer would just look at them as 1 and 0 (the exact assignment does not matter, only that it is consistent for the entire computer).

Only people who write inputs or read outputs need to know that it is really red/blue and not 1/0.

In a larger sense we use 1 and 0 because a binary number set is the easiest and most easily understandable set of values for doing math with 2 values.

Computers really only do math, so something that makes that math with 2 values easier for humans to understand is probably the best way to represent the values used by a computer.

Edited by Terwin

2 hours ago, Cheif Operations Director said:

An interesting post. I was more trying to get at the point that, from my understanding right now, there needs to be underlying code to make binary be a 0 or 1. My point is: what is the code that lets it be a 0 and 1 instead of a 2 and 3? I said that thing about the LEDs lighting up because that is only a display; you can not interact with it. For example, 001100 could be represented by a red light, 0 = off and 1 = on, so

red off, red off, red on, red on, red off, red off

 

however this can not be interacted with (except for just looking at it), so my point is what underlying code is necessary to make it a 0 or 1 and then proceed to make a program out of it. 

When you go all the way down the rabbit hole and get to the transistors, there is no code any more. Only signals. There are billions of transistors in a modern CPU, but it only takes a few to make a functional block. The arrangement of those transistors in different functional blocks and connections between those blocks determine what output is produced when a certain input signal arrives. That output is fed into more blocks that eventually turn on an LED in your computer monitor, or whatever.

You certainly can make electronics do stuff without any software. Take for example this video:

The guy is using two shift registers - devices that have no software in them; they are nothing but a bunch of transistors. By using just a few buttons he can produce any pattern on the 16 LEDs connected to those shift registers. You can see that it is the signal and the sequence of inputs that produce the output.
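A software model of what those chips do in pure hardware (a sketch; the real parts also have latch and enable pins):

```python
# A shift register: each clock pulse moves every stored bit down one
# place and takes one new bit in at the end -- no software involved,
# this is simply what the wiring does.
class ShiftRegister:
    def __init__(self, size=16):
        self.bits = [0] * size

    def clock(self, data_in):
        # new bit enters at position 0, the last bit "falls off" the end
        self.bits = [data_in] + self.bits[:-1]

sr = ShiftRegister(8)
for b in [1, 0, 1, 1]:        # clock four bits in
    sr.clock(b)
print(sr.bits)                # -> [1, 1, 0, 1, 0, 0, 0, 0]
```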


4 hours ago, Cheif Operations Director said:

...what is the code that lets it be a 0 and 1 instead of a 2 and 3.

It really comes down to the fact that we, as a group, have decided to call it "1" and "0".  You could also look at it as "true" and "false", or "on" and "off", or "potato" and "banana" if you really want to.  The labels aren't the important part, it's the ideas they represent.

 

(But the 1 and 0 work really well in boolean algebra, which is what we use to build computers.  See here: https://en.wikipedia.org/wiki/Boolean_algebra#Values)
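The link between the labels and the algebra is concrete: with 0 and 1 as the truth values, the Boolean operations line up with ordinary arithmetic on those numbers. A quick check:

```python
# With 0/1 as truth values, Boolean operations become simple arithmetic.
for a in (0, 1):
    for b in (0, 1):
        assert (a & b) == min(a, b)    # AND picks the smaller value
        assert (a | b) == max(a, b)    # OR picks the larger value
    assert (a ^ 1) == 1 - a            # NOT is "one minus"
print("all Boolean identities hold")
```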


7 hours ago, kerbiloid said:

Now I understand why there are eight bits in a byte.
Because the computer wheels have eight teeth.

But I'm still confused. How should I distinguish red bytes from blue bytes?

 

Spoiler

So ten bits is an overbyte, and 6 bits is an underbyte.

4 bits is actually known as a nybble (no joke, though it might also be spelled "nibble").

 

