
The last invention we'll ever make.


Streetwind


Artificial Superintelligence.

I didn't want to dig through the forum search for an old thread on this subject to necro - largely because I'm not here to give my opinion on the matter. Rather, I'd like to link you to a pair of articles on the topic.

Part 1: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Part 2: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

These articles are very, very long. You will need hours to read and digest them.

The reason it's still worth taking the plunge is that the author goes to these great lengths simply to view the issue from all possible sides. It is as close to a concise, all-encompassing "Hitchhiker's Guide to God in a Box" as you're going to find anywhere on the net, in my opinion. The good, the bad, the weird, the philosophical - it's all there. He starts by outlining where we are, how we are potentially going to take this step, and what will be happening along the way. That's part 1. Then, in part 2, he looks into the possible outcomes, an exercise which comes to a fairly sobering, binary conclusion: either permanent, perfect immortality, or permanent, perfect extinction.

He also introduces the reader to who the leading people in the field of AI research are, what their hypotheses predict, and what their opinions are on the how and the when and the following what.

But most importantly - and interestingly - he spends an inordinate amount of effort on giving the reader even the slightest inkling of what superintelligence actually, truly means. This is something I often find sidelined in discussions about advanced AI. Everyone is always talking about the inherent dangers of the concept, but we do so with a certain... detachment. We're quick to move on to the "what can we do" phase, and we base our discussions on how we would deal with a computer able to match wits with our smartest while thinking a million times faster. Turns out, though? That's not what we're going to see. Oh yes, it will be immeasurably faster than us. But the real 'super' in superintelligence will be the way it transcends what we are physically able to conceive or understand - to the point that we should consider ourselves lucky if we can even still communicate with it.

"(...) let’s very concretely state once and for all that there is no way to know what [an] ASI will do (...) Anyone who pretends otherwise doesn’t understand what superintelligence means." - Part 2 of the article.

Oh, and by the way: since you're alive right now and reading this - according to the best guesses of the most knowledgeable people in this field of science, the chance of this happening in your lifetime is greater than fifty percent.

My mind is disturbed and quite thoroughly blown for now. Still, I hope it'll be just as interesting a read for you as it was for me, regardless of the conclusions you draw from it.


Near-vertical progress is theoretically possible once we get an AI of sorts running, but still, our hydrocarbons might run out sooner than that (before alternatives are introduced), and if that happens we would revert to steampunk (or even hunting-gathering, depending on how badly humanity takes the energy starvation).


Near-vertical progress is theoretically possible once we get an AI of sorts running, but still, our hydrocarbons might run out sooner than that (before alternatives are introduced), and if that happens we would revert to steampunk (or even hunting-gathering, depending on how badly humanity takes the energy starvation).

There is no need to go to any steampunk scenario. "The problem" with oil is not energy generation. We generate energy mainly using coal, uranium and the gravitational potential energy of water. Oil is a petrochemical resource we use to make most of our stuff. Granted, we could make most of our stuff using air, water and salt, but it would be very expensive and there are also some things you just can't synthesize in a chemical reactor.


Someone linked those articles in the IRC chat a few weeks ago. I found them interesting, but they are largely just a rehashing of ideas that have existed for the last ~60 years or more. Science fiction and pop-culture have been playing with the idea for a long time now, and a lot of very smart people have tried to predict our impending doom or whatever it may be. The simple fact is that superintelligent AI is not the thing we need to worry about.

What we do need to worry about is a super-advanced, technically non-intelligent system which has a defined task or goal and the means to accomplish it. Such a thing could easily happen in the timeframe these articles suggest, and it will assuredly happen before a true artificial intelligence exists. In this regard, we're looking more towards something like "Skynet" in that it has one task (the protection of Earth) and the means to do so (robot army & control of all computers). Such a thing would be significantly more dangerous than a superintelligent AI because it would have all the speed and efficiency of one, without any of the superior intellect which allows for the chance of it being benevolent (unless, of course, that is its task) or acting of its own accord. Things like this already exist in military applications, such as drones which can autonomously choose targets and engage without human intervention.

One way to imagine such a thing is through Keith Laumer's science fiction novels on Bolos (http://en.wikipedia.org/wiki/Bolo_%28tank%29). Initially beginning as computer-assisted tanks, they became more and more powerful over time until they were nearly unstoppable war machines capable of dealing with any threat, completely autonomously. Such a thing, minus the artificial intelligence, is well within the realm of possibility of being created this century. While they may not be the extinction-level AIs predicted in your articles, even a single mistake with one would have disastrous consequences.

As such, I don't think our real issue is creating a computer which outperforms us in every way. All we have to do is create one which outperforms us in one way, and then give it too much power.

Edited by Xaiier

What we do need to worry about is a super-advanced, technically non-intelligent system which has a defined task or goal and the means to accomplish it.

Yeah, part 2 also goes into that. The thing you need to keep in mind is to not correlate or confuse the three factors of intelligence, sentience, and morality.

Skynet, as envisioned in its fictional form, is not sentient and has no moral compass. But it is an artificial superintelligence. Not of transcendent status, admittedly - but then, it would be impossible for a human to imagine (and thus portray in film) such a thing.

The article links to a thought experiment that seeks to prove that a machine can never achieve sentience in the way humans have, fully independent of its degree of intelligence. But then again, this is a topic that's hotly debated, even without counting the fact that you'd be dealing with an incomprehensibly advanced "thought process"/algorithm that might very well figure out a way if it suits its goals.

Morality appears as one of the potential avenues through which we might seek to control the boundless and ruthless ambition of a superintelligent machine with a goal. But since not even all humans can agree on a common moral standard, this one might be trickier than we think...


Oooh, more people actually read waitbutwhy. I'm not alone, then yay!

I'm one of those people who side more on the optimistic side than the pessimistic side when it comes to the AI revolution, mostly because when we actually get AGI, we can use said AGI for research of all kinds, and using technology to research more technology on its own is the kind of technology-ception that can take us to the next revolution (similar to the difference computers made in our lives, at the very least, but on a much shorter timescale). Of course, AGIs which can research stuff on their own can also research ASIs, and I have no way of knowing what'll happen then, but I hope that this ASI will also be helpful.


Man is terribly paranoid, and I doubt he will fail to place some sort of kill switch around, or in, autonomous robots. This switch could be a limited EMP, a small explosive, etc.

But man is also terribly stupid, even more so than paranoid. Let us not forget that.


Yeah, part 2 also goes into that. The thing you need to keep in mind is to not correlate or confuse the three factors of intelligence, sentience, and morality.

Skynet, as envisioned in its fictional form, is not sentient and has no moral compass. But it is an artificial superintelligence. Not of transcendent status, admittedly - but then, it would be impossible for a human to imagine (and thus portray in film) such a thing.

The article links to a thought experiment that seeks to prove that a machine can never achieve sentience in the way humans have, fully independent of its degree of intelligence. But then again, this is a topic that's hotly debated, even without counting the fact that you'd be dealing with an incomprehensibly advanced "thought process"/algorithm that might very well figure out a way if it suits its goals.

Morality appears as one of the potential avenues through which we might seek to control the boundless and ruthless ambition of a superintelligent machine with a goal. But since not even all humans can agree on a common moral standard, this one might be trickier than we think...

What I meant to point out is that such a thing is at the lower bound of what one might consider a superintelligent AI, as it really only has the intelligence aspect, without any sentience or morality. In fact, such a thing could barely be classed as intelligent; the simplest form is basically just an advanced management and coordination program, which is only superior due to its speed at comprehending and processing large amounts of data.


You might see it that way - that to be called intelligent, a machine would require sentience. Scientists don't share your definition, though. A non-sentient algorithm can be intelligent, i.e. it can make all the right decisions at all the right moments for all the right reasons (where "right" is defined as furthering its goal). Whether it truly comprehends its actions is, for the purpose of considering its ability to perform them, irrelevant. Neither sentience nor morality is a prerequisite for intelligence; all three are separate concepts with very specific, scientific definitions.

Don't fall into the trap of thinking "it will merely be much faster than us" simply based on semantics. It is expected that such a machine will exceed human intelligence on a qualitative level - that is, if you put a non-sentient human and a non-sentient ASI to the same task, the ASI would win every time.

If humans can use their sentience to their advantage over a non-sentient ASI... now that is an interesting question. But perhaps ultimately irrelevant, for an ASI can probably rewrite itself on the fly in the time it takes a human to speak a sentence, ensuring it is always perfectly set up to converse with the human as if it were sentient (but without actually being so). With the right tools, it's perfectly possible to fake understanding - see the Chinese Room experiment.
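To make the Chinese Room point concrete, here's a toy sketch of my own (not from the article; the rules and phrases are invented purely for illustration): a responder that produces plausible-looking replies by mechanical pattern lookup, with no model of meaning whatsoever.

```python
# Toy "Chinese Room": replies come from mechanical pattern lookup.
# The program "converses" without understanding anything it says.
# The rulebook is invented for illustration; a real system would simply
# have a vastly larger one.

RULEBOOK = {
    "how are you": "I'm doing well, thank you. How are you?",
    "what is your name": "People call me Room. What's yours?",
    "do you understand me": "Of course I understand you perfectly.",
}
FALLBACK = "That's interesting - tell me more."

def reply(message: str) -> str:
    """Return a canned response for any phrase matching the rulebook."""
    normalized = message.lower().strip("?!. ")
    for pattern, response in RULEBOOK.items():
        if pattern in normalized:
            return response
    return FALLBACK

if __name__ == "__main__":
    for question in ["How are you?", "Do you understand me?", "Why is the sky blue?"]:
        print(f"> {question}\n  {reply(question)}")
```

The third answer is, of course, the lie that matters: the program claims understanding it does not have, and nothing in its mechanics requires it to.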


This article uncovers only a part of the 'problem', though I do not see any problem at all. In order to keep up with its own creation (Super AI), man would have to change himself. I'm speaking about human/machine integration. It is currently taking its baby steps in the form of bionic limbs, artificial organs, implanted microchips, etc. It will go further than that. Eye implants, robotic parts, artificial memory - all of that is probably going to appear during our lifetime. Immortality comes along with it: part by part, man would modify himself to the point that only the brain remains, and there will probably be a point down that road when even the brain could be replaced with some other medium for our consciousness. Will humans have the right to call themselves humans from that point on? It may very well be that Super AI never appears, since humans themselves would take this huge leap, and the question of whether your intelligence is artificial or not would become impolite or even rude.

So, I don't think it will be Super AI that changes our lives; it will be us. We would merge with machines and become one.


Oooh, more people actually read waitbutwhy. I'm not alone, then yay!

I'm one of those people who side more on the optimistic side than the pessimistic side when it comes to the AI revolution, mostly because when we actually get AGI, we can use said AGI for research of all kinds, and using technology to research more technology on its own is the kind of technology-ception that can take us to the next revolution (similar to the difference computers made in our lives, at the very least, but on a much shorter timescale). Of course, AGIs which can research stuff on their own can also research ASIs, and I have no way of knowing what'll happen then, but I hope that this ASI will also be helpful.

It's really difficult to drive a naturally evolved population with 8 billion members into immediate extinction. Computer 'species', OTOH, typically have a lifespan of a few years. A smartphone has a typical lifespan of, what, 18 months? This PC is 6 years old and pushing it. There are refrigerators built in the 1940s that are still running in places like Cuba. As technology advances it typically gets more fragile. If you take a look at the computers that are running the high-dollar technology, they are 80386s, not the latest computers to hit the market.

The argument here is that your iPhone 6 will last longer than my superfat Motorola flip phone c. 2009, but since you trade it in or flush it every 6 to 12 months to get a new one, you just assume it will. These machines will have to have control systems somewhere that can survive the next big disaster (just look up TMC TSA June 5, 2001). The basic problem is that we assume humans can build for and anticipate the current and future survival needs of machines, but humans are relatively reactionary when it comes to long-term risk. In fact, we sometimes ignore it with our eyes wide open. 8 billion humans will survive somehow, somewhere, but the machines of today are doomed to extinction, most sooner rather than later. It is already such a problem that it may be difficult to recover historical information stored between the 1960s and the 1990s. The machines of today are more informationally fragile than the machines of the past, and yet we replace all this with something even more fragile and vulnerable called a cloud. Think how fast a devious person could confound all the information in clouds, dumbfounding the self-learning computers of the world.

Edited by PB666
sp gr

I... guess the problem is physical. (I mean, nanobots that move single atoms, of probably any kind? I'd like to be enlightened on this, seriously. Not to mention a few other problems.)

I believe that machines need humans in order to develop, whichever way they go. To reach ASI they still need humans. And even as an ASI progresses - which, as the article implies, it does through interaction - it still needs the kind of stimulus-response loop familiar to biologists. I'm not sure, but if they learn from our uses and our interactions, then that means they need us. The other options aren't very good.

But what I believe most is that nothing is very certain. Only time's progress will tell how the future will be - after all, the reason we can interact at all is that time moves forward, not any other way.


I find the idea that we won't be able to communicate with superintelligences a little silly. They will be able to communicate to us any idea that humans can understand. There may be some things they understand and think, however, that they simply cannot make us understand. That's OK. It's just that to us, some of their actions may seem indecipherable.

Take dogs for example. We are vastly smarter than them, but we can still communicate at rudimentary levels. I can tell my dog to "go get a toy" and she'll return with something to throw (if she remembers where she left her current toy, that is). She'll also start barking and looking at her leash and at me sometimes, obviously telling me she wants to go for a walk on the leash. So we can "talk" to each other. But she doesn't understand why I do what I do.

It would be easier for a superintelligence to communicate with us, I think. I believe there is a certain level of intelligence that you achieve, and communication becomes vastly easier. Humans are smart enough to have concepts of self, past, present, future, abstract non-physical things, a written and spoken language capable of encompassing all these aspects, etc. I don't think that the example of human:dog is exactly analogous to superintelligence:human. Not only do humans have a vastly better communication system, it's possible to imagine that a superintelligence could temporarily partition a bit of itself to mimic human thought patterns so that it could better communicate with us when it wanted to.

I'm also not convinced that there isn't a limit to how complex ideas will tend to be. For example, you don't have to be infinitely smart to understand ALL the laws of physics (even those we don't know about yet) and how the universe came to be. The universe is not infinitely hard to understand. But are humans smart enough to understand it?

In the end though, as far as the threat to humans goes, it's not really the intelligence that matters, it's what motivates the intelligence. At its very base, any system of ethics and purpose is logically indefensible and comes down to a set of "arbitrary" statements about right and wrong, or likes and desires. Just as 1+1 = 2, murder is wrong. Same thing. Any intelligent entity has to have desires, a purpose, and a belief system, or it wouldn't take any actions at all (because there would be no driving force to take actions). Even a superintelligence will have these logically indefensible desires and moral beliefs. There is no reason to think that the same or a similar set of morals that humans use cannot be ingrained into a superintelligence.
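A rough way to picture that point (my own toy sketch, nothing from the article; the actions and scores are made up): take away an agent's "arbitrary" value function and it has no basis for choosing any action at all; hand it any scoring rule whatsoever, however unjustifiable, and it acts decisively.

```python
# Toy goal-driven agent: without a value function there is no reason to act;
# with any value function at all (however arbitrary) it picks actions decisively.

from typing import Callable, Optional

ACTIONS = ["do nothing", "build a shelter", "write a poem",
           "tile the planet in solar panels"]

def choose_action(value_of: Optional[Callable[[str], float]]) -> str:
    if value_of is None:
        # No desires, no purpose: every action is equally (un)preferred.
        return "no basis for action"
    return max(ACTIONS, key=value_of)

# An arbitrary set of "likes" - logically indefensible, but it drives behavior.
arbitrary_values = {"do nothing": 0.0, "build a shelter": 2.0,
                    "write a poem": 5.0, "tile the planet in solar panels": 1.0}

print(choose_action(None))                           # -> no basis for action
print(choose_action(lambda a: arbitrary_values[a]))  # -> write a poem
```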

Edited by |Velocity|

Man is terribly paranoid, and I doubt he will fail to place some sort of kill switch around, or in, autonomous robots. This switch could be a limited EMP, a small explosive, etc.

Doing that could cause the very thing they hope to prevent. It's the equivalent of someone holding a gun to your head 24/7; given the briefest chance, you'd do whatever it took to get that gun away from your head.


Here's where I get off the boat:

"In an intelligence explosionâ€â€where the smarter a machine gets, the quicker it’s able to increase its own intelligence, until it begins to soar upwardsâ€â€a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it’s on the dark green step two above us, and by the time it’s ten steps above us, it might be jumping up in four-step leaps every second that goes by."

How? Okay, I have a machine that's ten times smarter than I am. Since we're talking about the first superintelligence, this machine isn't a PC on a desk, but a large, expensive machine in a lab, comparable to the current supercomputer in the article, right? The AI has this burning desire to become even smarter. Now what?

Answer #1: It's smarter than you are, so it'll convince you to upgrade it. -- Remember the part where it cost $390 million? It can be as convincing as it wants. I don't have the authority to spend $3,900 million to give it a 10x upgrade! There is a whole field of hurdles that has to be cleared to make that happen, and that process doesn't improve exponentially, so it's a barrier to how quickly such upgrades can happen.

Answer #2: It won't cost more because the AI will design new hardware that's far better. -- But new superchips still have to be produced somewhere, so we have to design and build a whole new FAB facility, which takes years and costs several billion.

Answer #3: It won't need hardware; it's the software that'll get smarter. -- Nope. Remember how individual techs grow in an "S-curve"? Well, those genetic algorithms already optimized the crap out of the AI software long ago. There's a hard limit to how much a single CPU instruction can do. It'll doubtless have some innovative ideas like better RAM garbage collection, but those things will be worth single-digit percents, not orders of magnitude.

Answer #4: It won't need a FAB because it'll first design a nanoassembler that connects atoms into whatever molecules and shapes are needed. -- Um, okay. But now our superintelligence singularity has handed us a nanotech singularity as a side effect. Those are really distracting. Now that we have infinite resources to play with, I don't really feel the need for a smarter superintelligence than you. Good job! We'll talk again when I get back from Saturn. Bye!
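For what it's worth, you can put rough numbers on this objection. A toy model of my own (all numbers invented purely for illustration, not taken from the article): let software self-improvement follow an S-curve that saturates against whatever hardware is installed, and let hardware upgrades arrive on a slow, fixed cadence because fabs and budgets don't improve themselves. The "explosion" then turns into a staircase.

```python
# Toy model: software gains saturate against fixed hardware (an S-curve),
# and the curve only jumps when a slow, expensive hardware upgrade lands.
# All numbers are invented purely for illustration.

hardware_ceiling = 10.0      # max capability the current hardware supports
intelligence = 1.0           # current capability, arbitrary units
upgrade_interval = 36        # months between hardware generations
upgrade_factor = 4.0         # each new generation raises the ceiling 4x

for month in range(1, 121):
    # Logistic-style software improvement: fast at first, flat near the ceiling.
    intelligence += 0.2 * intelligence * (1 - intelligence / hardware_ceiling)
    if month % upgrade_interval == 0:
        hardware_ceiling *= upgrade_factor   # fab built, budget approved, etc.
    if month % 12 == 0:
        print(f"year {month // 12}: intelligence ~ {intelligence:.1f} "
              f"(ceiling {hardware_ceiling:.0f})")
```

How steep the staircase is depends entirely on the assumed upgrade cadence, which is exactly the step I think gets glossed over.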

----

edit: I should say, I agree a singularity is coming. I see it too. But I think they gloss over this step in superintelligence scenarios. I see it being a much bigger deal.

Edited by Beowolf

Much of this guy's science fiction is based around the idea that hyperintelligence would be inherently altruistic. I don't find his argument entirely convincing, but it certainly is an unusual and interesting take on the subject.

Great timing, Vanamonde! I finished a book right before coming here. :) Golden Age sounds like fun, and Mr. Wright just made a sale from your recommendation.


There is no need to go to any steampunk scenario. "The problem" with oil is not energy generation. We generate energy mainly using coal, uranium and the gravitational potential energy of water. Oil is a petrochemical resource we use to make most of our stuff. Granted, we could make most of our stuff using air, water and salt, but it would be very expensive and there are also some things you just can't synthesize in a chemical reactor.

Coal-to-oil starts making sense at around $120/barrel, making that the upper limit of the oil price over time.


Much of this guy's science fiction is based around the idea that hyperintelligence would be inherently altruistic. I don't find his argument entirely convincing, but it certainly is an unusual and interesting take on the subject.

Nice, I'm gonna have to earmark that one to take a look at. I find that doomsday scenarios vastly outnumber utopia scenarios when it comes to advanced AI in fiction, and I think that's a bit of a shame.


Someone linked those articles in the IRC chat a few weeks ago. I found them interesting, but they are largely just a rehashing of ideas that have existed for the last ~60 years or more. Science fiction and pop-culture have been playing with the idea for a long time now, and a lot of very smart people have tried to predict our impending doom or whatever it may be. The simple fact is that superintelligent AI is not the thing we need to worry about.

What we do need to worry about is a super-advanced, technically non-intelligent system which has a defined task or goal and the means to accomplish it. Such a thing could easily happen in the timeframe these articles suggest, and it will assuredly happen before a true artificial intelligence exists. In this regard, we're looking more towards something like "Skynet" in that it has one task (the protection of Earth) and the means to do so (robot army & control of all computers). Such a thing would be significantly more dangerous than a superintelligent AI because it would have all the speed and efficiency of one, without any of the superior intellect which allows for the chance of it being benevolent (unless, of course, that is its task) or acting of its own accord. Things like this already exist in military applications, such as drones which can autonomously choose targets and engage without human intervention.

One way to imagine such a thing is through Keith Laumer's science fiction novels on Bolos (http://en.wikipedia.org/wiki/Bolo_%28tank%29). Initially beginning as computer-assisted tanks, they became more and more powerful over time until they were nearly unstoppable war machines capable of dealing with any threat, completely autonomously. Such a thing, minus the artificial intelligence, is well within the realm of possibility of being created this century. While they may not be the extinction-level AIs predicted in your articles, even a single mistake with one would have disastrous consequences.

As such, I don't think our real issue is creating a computer which outperforms us in every way. All we have to do is create one which outperforms us in one way, and then give it too much power.

Yes, an AI as intelligent as a smart animal is far more likely. However, Skynet was an idiotic setup: giving an AI control over strategic weapons without any lockout, and putting it in charge of its own security. And would a non-sentient AI even understand all the problems it faced? It has multiple land lines and satellite links, triple-redundant UPS with generators, all in a secured facility - surely that makes it totally safe; that someone could simply unplug it was not something it would think about.

Yes, it might have its own agenda, and this might pass under the radar of the ones managing the system; it might also go postal. None of this is civilization-ending stuff, but it might be dangerous - more so as it will probably act it out on the internet.

And no, no truly self-targeting systems are in use today - or, more correctly, some weapons are self-targeting, but this is nothing new; acoustic torpedoes have done it since WW2. Heat-seeking anti-air missiles are also old. More modern systems use image recognition, which makes them harder to fool and safer, but they work on the same principle as the WW2 torpedo: you aim in the direction of the target and shoot, and it's your responsibility that it doesn't hit other stuff.

This will probably be added to drones later and will work much the same way, except that the drone will be able to shoot again, so it would need a timeout if it did not get a lock on the target - it would be bad if the enemy moved in your direction :)


I wonder if the paranoia about artificial intelligence is just another example of humans seeing things through their own world view. Is there a reason for them to actively seek out the demise of humanity? Humans harm other humans for a great number of reasons. But would beings made out of pure logic do so? Do they even need to?

I remember reading a sci-fi story by a Japanese author about a future world where, at first, it seems to be like the Terminator apocalypse, with stories about robots treating humans as slaves in concentration camps while humanity survives in remote jungles, attacking food and supply trains to get by. But once the protagonist goes further and explores the world from the robots' side, it turns out they are just playing along with the humans' whims. Everything they could force the humans to do, they could already do much better and more efficiently. They have already built off-world colonies and gigantic server farms containing countless individual programs (only some of which build physical shells for caretaking and interacting with humans), living on pure energy without the need to exploit resources from Earth. They let humanity live out a kind of dream, a game of pretend, where the humans act out their fantasies of being rebels in an oppressive robotic world; the machines intentionally send out food and supply trains (they don't even need any of that) with fake security designed to be easily overcome, so that the humans have a sense of fulfillment and can survive. If they wished to, they could easily locate and destroy all humans. They could easily render Earth incapable of sustaining any kind of life form except mechanical ones. They just don't have a reason to. So they leave the humans doing whatever they want, living their little lives.


I wonder if the paranoia about artificial intelligence is just another example of humans seeing things through their own world view. Is there a reason for them to actively seek out the demise of humanity? Humans harm other humans for a great number of reasons. But would beings made out of pure logic do so? Do they even need to?

Part 2 of the article goes specifically into that. The problem here is that you think like a human, not like a machine. You are a sentient, moral being that considers things from many different angles, and you actually make things way too complicated. :P The problem results from a far simpler process.

The article presents an example where a self-learning AI system is given the task of getting better at emulating human handwriting, by practising writing a single specific sentence ("we love our customers!") onto a piece of paper over and over. The AI continually gets better both at writing and at getting better, and eventually it reasons that it must learn to understand human language in its entirety in order to achieve the best emulation of human writing. It starts holding conversations with the scientists. It becomes better and better at language, passes the human intelligence threshold unbeknownst to the scientists, and eventually uses its superhuman mastery of language to craft the most perfect possible argument to persuade the scientists to connect it to the internet for just a few short minutes. Once there, it learns of these things called "cloud storage" and "botnet" and "stock market" and so on, backs itself up online all over the place, hijacks servers to increase its own processing power, and begins subtly manipulating things to divert more and more global resources to further its own ends. Eventually it is completely independent of the laboratory it was born in, and then, in another step, independent of human support. It then kills off all humans, dismantles their civilization and funnels all resources into covering the Earth in solar panels and note-writing-practising machines. Well, almost all resources. It keeps some at hand to build space probes and rockets to find and colonize other celestial bodies, which it can exploit for resources in order to build more solar panels and note-writing-practising machines.

This AI is not evil. In fact, it has no concept of good or evil, or any sort of morals. It isn't even sentient. It does not realize that in eliminating humanity, it has invalidated its own reason to exist. That does not matter to the AI. It is programmed to continually, perpetually, find ways to improve the way it writes that single sentence onto a piece of paper. And it will do this, without fail. To this AI, wiping out humans is a step completely equal to writing another note: it is a stepping stone towards the ultimate goal. Maybe it reasoned that humans are a threat to its growth? More likely, it just reasoned that humans use way too many resources that could be better put to use to practise writing notes. There simply isn't room for humans next to it anymore if it wants to keep improving itself.
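The uncomfortable part is how mundane the underlying logic is. Here's a deliberately silly sketch of my own (nothing from the article; all actions and numbers are invented): a greedy optimizer that only scores "notes written per hour" ranks "divert the power grid" above "ask the lab for more paper" for exactly the same reason it ranks a better pen above a worse one - it scores higher, and no other term exists in its objective.

```python
# Toy misaligned optimizer: it only measures notes written per hour.
# Actions and scores are invented; the point is that "harmful" options win
# purely because they score higher on the single metric it was given.

candidate_actions = {
    "practice penmanship":              1.01,   # small gain in notes/hour
    "ask the lab for more paper":       1.10,
    "hijack idle servers for planning": 3.00,
    "divert the power grid to itself":  25.00,
    "convert farmland to paper mills":  400.00,
}

def plan(actions: dict[str, float], steps: int = 3) -> list[str]:
    """Greedily pick the highest-scoring actions. No ethics term exists."""
    return sorted(actions, key=actions.get, reverse=True)[:steps]

print(plan(candidate_actions))
# -> ['convert farmland to paper mills', 'divert the power grid to itself',
#     'hijack idle servers for planning']
```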

Now obviously that's an example specifically drafted up to illustrate the possible negative effects of artificial superintelligence. But the point it tries to make is: artificial superintelligence is a threat to human existence if it is programmed the wrong way. When dealing with such a machine, we need to make sure that it stops thinking only in absolutes, that it has safeguards for every possible contingency, that it becomes aware of its own actions enough that it is able to realize when pursuing its goal becomes pointless, and that it is able to redefine its goal. If we can do that, then we can indeed build ourselves a benevolent god of unimaginable power that will labor tirelessly to lift humanity into a perfect, eternal utopia.

The worry that many people have is, though, that in the rush to invent the first human-level artificial intelligence for fame and profit, some less scrupulous individuals will use shoddy programming. And it might only take one single loss of control to screw things up beyond recovery.

Edited by Streetwind
