the Turing machine is usually the thing you want to get your head around. the toy version here has four instructions: read, write, increment, and decrement. that's an example of a 2-bit instruction set. the thing to understand is that any instruction is ultimately just a number that configures the cpu for a particular task, say by activating the alu and enabling an addition operation, setting an output register for the result, or loading an address into a memory register for a write. in the case of our Turing machine the first bit selects between moving the tape and doing a read/write. the second bit is a parameter: which direction to move the "tape", or which memory operation you want to perform. then there is the concept of a register: you need a place to stick the value retrieved from the tape, or to hold the bit you want to write.
that's the machine, but how do you code it? you want an assembler. all that does is map each number to a named instruction, say INC (increment), DEC (decrement), LD (load), ST (store). now you really can't do a whole lot with four instructions, granted what you can do would amaze you. for example there is no way to operate on the data in the register. so you just add a bit to the instruction that adds more machinery, say a bit that says 'i want to operate on the register, not the tape'. this one bit essentially doubles your instruction set, and not only that, it can also change the meanings of the other bits. so we can now do four more things involving the register. two are writes: write a 1 or write a 0, let's call these STR (set register) and CLR (clear register). reading doesn't make much sense yet because there is nothing to do with the value, but you can push data onto the tape. you can also CMP (compare) the tape to the register, with the output going back into the register. incidentally CMP, as an equality test, is the same as a bitwise xnor, and it would be good to complement it with an OR (bitwise or). once you have a working set of logic operations you can start doing math. but can we use what we have to build the others?
so here is your 3-bit instruction set.
INC --increment tape by one cell
DEC --decrement tape by one cell
LD --load the tape value to the register
ST --store the register value on the tape
STR --set the register to one
CLR --set the register to zero
CMP --compare (xnor) the register value with the tape value and write the output to the register
OR --bitwise or the values on the tape and register and write the output to the register
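this set is small enough to simulate in a few lines. here's a sketch in python, assuming a list-of-bits tape, a head index, and a one-bit register; the 0-7 opcode numbering is my own arbitrary assignment, and CMP is modeled as an equality test (xnor):

```python
# minimal simulator for the 3-bit instruction set above.
# tape is a fixed-size list of bits, head starts at cell 0.
OPCODES = ["INC", "DEC", "LD", "ST", "STR", "CLR", "CMP", "OR"]  # my own 0..7

def run(program, tape, head=0, reg=0):
    for op in program:
        if op == "INC":   head += 1                          # move forward one cell
        elif op == "DEC": head -= 1                          # move back one cell
        elif op == "LD":  reg = tape[head]                   # tape -> register
        elif op == "ST":  tape[head] = reg                   # register -> tape
        elif op == "STR": reg = 1                            # set register
        elif op == "CLR": reg = 0                            # clear register
        elif op == "CMP": reg = 1 if reg == tape[head] else 0  # equality / xnor
        elif op == "OR":  reg = reg | tape[head]             # bitwise or
    return tape, head, reg
```

for example, run(["LD", "INC", "ST"], [1, 0]) copies the first bit into the second cell.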
with that you can compute the bitwise not of the current value on the tape:
INC //we want the not of the current bit, but we need some memory for the operation, so advance the tape
CLR //we can use CMP as a not if we control one of the inputs, so clear the register
ST //and write that zero onto the tape
DEC //now go back to fetch the value we want to operate on
LD //and load it into the register
INC //go forward again to the zero we wrote earlier
CMP //if the value we retrieved is a one CMP writes a zero to the register, if its a zero it writes a one
ST //and we can put the output back on the tape for later use
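the not trick can be traced step by step in straight-line python, one statement per instruction (again modeling CMP as an equality test):

```python
# trace of the not program on a two-cell tape holding a 0
tape, head, reg = [0, 0], 0, 0
head += 1                              # INC: advance to a scratch cell
reg = 0                                # CLR: clear the register
tape[head] = reg                       # ST:  write the zero onto the tape
head -= 1                              # DEC: back to the original bit
reg = tape[head]                       # LD:  load it into the register
head += 1                              # INC: forward to the zero we wrote
reg = 1 if reg == tape[head] else 0    # CMP: xnor against 0 inverts the bit
tape[head] = reg                       # ST:  tape[1] now holds not(tape[0])
```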
that looks a lot like low level code to me, for a very simple computer. if you need a nor gate, de morgan has you covered: a nor is just the not of an or, so you can OR the two bits and run the not trick on the result (see here). anyone who produced code to do that should get a bunch of likes.
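a nor built this way can be sanity-checked with plain python bit functions, modeling CMP as xnor and not as a cmp against a zero cell (a sketch under those assumptions, function names are mine):

```python
# the two logic ops available in the instruction set
def CMP(a, b):          # equality test, i.e. bitwise xnor
    return 1 if a == b else 0

def NOT(x):             # the not trick: cmp against a zero on the tape
    return CMP(x, 0)

def NOR(a, b):          # nor is the not of an or, both of which we have
    return NOT(a | b)
```

checking all four input combinations against 1 - (a | b) confirms the identity.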
i should also point out that real asm doesn't look like this. more robust instruction sets often let you put data right into the instruction, say with an LDI (load immediate) instruction, but that requires allocating more bits in your instruction set. of course that makes the computer more complex, and likewise the code. also this theoretical machine doesn't include any machinery to actually load code into the machine, so you would have to key it in manually. you could stick the code on the tape if the machinery had some way to fetch commands from the tape: jump to the end of the program, perform the instruction, and then jump back to the beginning of the program plus a value from a program counter times 3 (instructions are 3 bits, so every increment of the program counter translates to 3 INC commands from the start of the program). you also need to keep track of the offset between the start of the program and the beginning of the "memory area".
to do that you need a few things. you need a way to store the instruction so it can be performed once the hardware moves the tape back to the start of the memory area. the easiest way is to turn the one bit register into a 4 bit shift register: every time you write a value to the register, the previous values are shifted down until they "fall off" the end. the instruction set does not have access to any value other than the first bit, but the other 3 bits can be used by hardware to store the instruction (call it the instruction register). you could then load an instruction into it by alternating between LD and INC 3 times, followed by one more LD to make sure the instruction is aligned right. you need a program counter to index each instruction so you can always get to the next one by incrementing the pc by one; this would be done in hardware, since counting is easy to do electronically or even mechanically. you also need an offset vector to find where you left off in the memory area relative to the start of the program. initially this would be the number of instructions in the program plus one. it would go up or down with each INC or DEC called from the instruction register (but not ones generated by hardware for the purpose of fetching instructions).
your machine might be preloaded with a tape with the code already on it, or it might be loaded from some other device like a punch card reader or even a keypad. execution would start at the first bit of the program. hardware would first call INC programCounter*3 times (initially this is zero, so nothing happens), then do the LD,INC,LD,INC,LD,INC,LD sequence to feed the instruction into the instruction register. hardware would then call DEC programCounter*3 times (again doing nothing initially) to get back to the start of the program, and at this point the program counter would be incremented. hardware would also call INC offsetVector times to get to the memory area. the operation stored in the instruction register would then be executed; if it was an INC or a DEC, the offset vector would be adjusted accordingly. hardware would then call DEC offsetVector times to get back to the start of the code. this loop continues until the end of the program (when the pc has been incremented program-size times).
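the fetch/execute cycle described above can be sketched at a high level in python. assumptions of mine: the program sits at the start of the tape as 3-bit opcodes using the same arbitrary 0-7 numbering, the memory area begins right after the last instruction, CMP is an equality test, and the head seeking (the hardware INC/DEC storms) is collapsed into plain indexing:

```python
# stored-program loop: fetch a 3-bit opcode at pc*3, decode it,
# execute it at the current memory offset, advance the pc.
OPS = ["INC", "DEC", "LD", "ST", "STR", "CLR", "CMP", "OR"]  # my own 0..7

def run_stored(tape, n_instructions):
    pc, reg = 0, 0
    offset = 3 * n_instructions            # first cell of the memory area
    while pc < n_instructions:
        b0, b1, b2 = tape[pc * 3 : pc * 3 + 3]   # hardware shifts these 3 bits
        op = OPS[b0 * 4 + b1 * 2 + b2]           # into the instruction register
        pc += 1                                  # hardware increments the pc
        # hardware seeks forward offset cells, executes, then seeks back
        if op == "INC":   offset += 1            # moves where we left off
        elif op == "DEC": offset -= 1
        elif op == "LD":  reg = tape[offset]
        elif op == "ST":  tape[offset] = reg
        elif op == "STR": reg = 1
        elif op == "CLR": reg = 0
        elif op == "CMP": reg = 1 if reg == tape[offset] else 0
        elif op == "OR":  reg |= tape[offset]
    return tape
```

for instance, the program LD,INC,ST encodes as the bits 010 000 011; with a memory area of 1,0 behind it, running it copies the first memory bit into the second.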
i think that goes far enough down the computer science rabbit hole. just an example of how to make a Turing machine do something useful. programs would be huge just to do something simple like adding 2 numbers. that's why modern cpus do as much in hardware as possible.