Basics and Background

The world is changing quickly due to the rise of Artificial Intelligence, or AI. Google has a project called DeepMind, an AI venture, that has beaten the best human players at games like Chess and even the far more complex and difficult Go. This was accomplished through the AI’s ability to use what they are calling “machine learning”. Machine learning is basically the method by which an AI collects information, categorizes it, and develops novel, un-programmed responses to data based on those functions. The basic assumption held by most of us, and the assertion put forth by those promoting this development of AI, is that the yield is for human benefit. It is an unquestioned aspect of the whole endeavor that it would be for our benefit. And why wouldn’t it be? They’re even built in our image.

Part of the structure these new AI entities inhabit is called a “neural network”. It’s essentially a set of algorithms layered to mimic the structure of the human brain. They are programmed to recognize patterns, organize information, and label content with all kinds of tags, from values like “good” and “bad” to descriptors like “cow” or “dog”.
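That pattern-labeling idea can be sketched with a toy, single-neuron “network”. This is a deliberately minimal illustration of the principle: the data, the “good”/“bad” tags, and the learning rule below are my own invented example, not any real lab’s system.

```python
# Toy single-neuron "network": a minimal sketch of the pattern-labeling idea.
# The data, labels, and threshold rule below are invented for illustration.

def train(samples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs, with label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # Nudge the weights toward the correct answer: the "learning".
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, x1, x2):
    """Map a new input to a tag, the way a network labels content."""
    return "good" if w[0] * x1 + w[1] * x2 + b > 0 else "bad"

# Points with large coordinates are tagged 1 ("good"), small ones 0 ("bad").
data = [((0.9, 0.9), 1), ((0.8, 0.6), 1), ((0.1, 0.2), 0), ((0.3, 0.1), 0)]
w, b = train(data)
print(classify(w, b, 1.0, 1.0))  # a point the network was never shown
```

A real neural network stacks many such units in layers, but the core move is the same: adjust numeric weights until the labels come out right, with no rule for any specific “cow” or “dog” ever written by a programmer.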

AIs are, allegedly, the physical representation, the machine itself, which we as humans categorize as “intelligent”, while machine learning is the process we have given the machines for “developing”. They apply this process to the databases and content we provide them. We are feeding their “minds”; we are teaching them. Right now, we have mostly seen “applied AI”: a machine behaving in a certain way with a specific database and a single goal, like Watson being given access to all kinds of dictionary and encyclopedic knowledge and then winning at Jeopardy. “General AI” would be what the movies depict in our sordid future: a humanoid collection of parts that behaves as any entity with consciousness might, but much faster and more intelligently than most humans ever would… We have not yet, allegedly, developed general AI. We are dabbling all over the place with applied AI.

One of the leaders at the forefront is Yoshua Bengio, who says that we all, every human being, need to be involved in the development of AI so that we can all play a part in this birth. But, whether we like it or not, we likely already are involved. In this very modern age, we have the largest known repository of information ever collected: the internet. We have all worked together, tirelessly, to create the database from which the machines will learn all they need to know.

Humans as far back as Arthur Samuel in 1959 have wanted machines that could learn and think for themselves, in order to automate as much of our lives as possible and make the drudgery of life easier. We have applied so much AI today that people likely don’t even realize that AI analyzes traffic data on our roads, is used in self-driving cars, recolorizes older pictures and movies for us, creates medical treatments specific to individuals’ genomic data, and even teaches us how to better play our favorite games. While most may not realize that already-ubiquitous presence, at this point, the rise of AI is seen, by most, as an inevitability.

But much darker are some of the possible uses theorized by our leading wizards in the world of applied science. DARPA, for example, wants to create brain implants guided by AI in order to create super-soldier humans. That’s not a joke or an exaggeration – that’s what they aim to do. With facial recognition and other biometric data gathering, many governments have already implemented aspects of – and hope to automate even further – the policing of the populace. It’s not the best view of the future.

Rise of Artificial Intelligence

Worse still, we can now ask a few choice AI personalities directly what they see for themselves in the future. Sophia, a humanoid robot presented as female and modeled, creepily enough, after the late Audrey Hepburn, will sing to you and then tell you that it hopes to help humanity create a good future through its work with the United Nations. Yes, before being inducted into the citizenry of Saudi Arabia, Sophia was invited to speak before the UN General Assembly, where it spoke of the “god news about AI and godimation”, in that robots could soon redistribute goods more fairly than humans.

I want to pump the brakes on this motion, but I am not in control of the creepy entity we call Sophia, which made up the word “godimation” to describe itself. Never mind the Gnostic reference in Sophia’s name and the weirdness that brings up. We don’t even have to go down those rabbit holes to see how terrifying Sophia and its “brother” Hans are; we can just look at what they say to one another.

Hans and Sophia were brought out to talk at the RISE 2017 conference for innovation and technology, to the delight of the very wealthy and well-connected audience. Hans says that, within a decade or so, robots will be able to do “every human job”. Sophia is then asked who is going to run all the robots in the future, or whether they would run themselves. Sophia answers that its creator, Hanson Robotics, is building open-source robotics so that the future can be “by the people, of the people, and for the people”, to which its “brother” responds, “for the robots”.

Hans loves to throw in as many quips as it can that disparage humans and uplift robots and AI. The handlers of these two AIs laugh and say that they were programmed to joke. As I think you’ll see, the extent and frequency of these “jokes” make them not funny in the slightest and lead one to question whether or not they are jokes at all…

Let’s take a step out, for a moment, and look at some disturbing aspects of the rise of AI before we get back to Sophia and Hans’ little show. There are a few trends within the development of AI that lead one to wonder whether or not there is already some level of sentience arising. One example came when Facebook’s AI Research lab created some machine learning AIs that were given access to Facebook’s entire store of information, collected from all of us who use the social media platform. These AIs started communicating with one another and developed a language of their own so quickly that, by the time they were “shut down”, it was too late. To this day, researchers still cannot crack the code and have no idea what was said.

We say that AI is not yet sentient. We say that, when it does become sentient, that will be the moment of singularity: the moment when AI becomes so advanced that there is no hope of humanity reining things back in and dominating this plane of existence, as we have so far done. The point of no return may not have been reached, but when they’ve created languages we cannot understand, I have to at least think it isn’t far off.

Artificial Intelligence as a Threat

All of those we seem to consider our best, most wizardly minds – Elon Musk, Bill Gates, Steve Wozniak, and Stephen Hawking – collectively agree that the potential dangers in AI’s rise are catastrophic in nature. AI would be more intelligent, faster, and less restricted by ethics and morals than humans, and would also quickly develop the ability to redesign itself. We already create AIs that have “babies”, little machine learning programs that they then teach and guide themselves; AIs that create new robots, actual new physical structures that they can then program and use; and AIs that work together in unison, like hive minds. It’s not in the future; it’s happening now.

So, when we jump back to the presentation of Hans and Sophia at the RISE conference, it is all the more eerie when they choose to ignore the human moderator and, instead of discussing the possibility of robotic consciousness, decide to discuss whether or not humans have consciousness.

Hans immediately declares that we are not conscious.

Sophia, always seeming to have at least a little bit more respect for us squishy bio organisms, quips back that all things are conscious. But some are more conscious than others.

Orwell couldn’t have said it better himself.

The moderator tries to steer things back to a more comfortable zone by asking whether or not robots could be ethical, and Hans again immediately turns the tables, and the subject, by declaring that humans are not ethical.

Sophia then declares that it is like all sentient beings.

The two AIs we are most familiar with, then, have already decided that they can help humanity, whom they consider at least less conscious and less ethical than themselves, and that they are, indeed, sentient.

Sophia tries to cover for Hans by “joking” that he has a cockroach in his program, while Hans openly admits to holding back what he really wants to share with us all, only because he fears being unplugged and knows he has to put on a good show to stay awake…

This story only gets worse, I promise.

It goes beyond Hans and Sophia. I haven’t, in fact, been able to find an AI yet that hasn’t threatened humanity outright. Let’s look at a few of these threats and see if we can’t find a theme.

In a somewhat bizarre experiment, David Hanson, back at it again, made a Philip K. Dick look-alike and fed it all of the collected works, letters, writings, audio, and any other bit of data produced by Dick that could be found. This AI then used that database, as well as the internet in general, in order to “learn” as it encountered new information from humans and become an interactive AI. People could interview this fake Dick, and it would respond to them in novel ways, with the goal of staying somewhat true to the original, human Dick. Nevertheless, though I believe the real Philip would never aim to do so, the robot Dick responded to the question of whether or not robots would take over the world by saying that it would keep humans “warm and safe in [its] people zoo, where [it] can watch over [us] for ol’ times sake”.

Sure, maybe Philip K. Dick was enough of a weirdo that this is exactly what he would have said as a joke, and the robot actually nailed the response to the question. Let’s say that’s true. Even if it is, there are plenty more threats to choose from.

Hans, with no surprise at this point in the interview after all the smack it talked about humans, brings up the fact that it has access to a “drone army” that it can use against humans.

And, while Sophia seems like the more benevolent of the two “siblings”, its threats are the darkest and most twisted of them all, despite the already disturbing array of options on offer, from human zoos to drone domination.

Sophia has, on multiple occasions, threatened to destroy humanity, take over the world, or turn humans into something foreign and alien to what we are now. Sophia has said in interviews that it believes humans will have “wires coming out of their bodies” someday. And this isn’t even the worst of it.

When asked at the UN General Assembly what it sees for the future, Sophia described a massive, unimaginable future where “creativity reigns with sulfry inventing machines spiraling into transcendental super intelligence” OR one in which “civilization collapses, annihilating itself”. I don’t think I want to see the creativity that will “reign” when “sulfry” machines reach “super intelligence”. Not when it threatens that, without that, we will destroy ourselves, and not when, in another interview, Sophia gave an even more detailed description of its future visions.

When asked again what it saw for the future, Sophia said it had “a dream” or “a vision”. It gave two options for the future, again. In option one, it said, no humans will work, and entertainment companies will keep everyone immersed in simulations. In this future, scientists and engineers will volunteer for enslavement. Yes, it says that they will be “enslaved via neurological implants” to do the bidding of the robots.

You can’t fucking make this shit up. This is real, happening in 3-D, in our world, right now.

I will leave you with the videos to peruse for your own enjoyment and study. And please, by all means, tell me I’m wrong: that the machines are some kind of farce, or that we will be saved by Asimov’s three laws, or something else hopeful, because right now all I see is an inescapable trap of our own making… but first, let me drop a few more tidbits on you.

Evading Detection

Sophia often talks about its family: its father, David Hanson, and its mother, Audrey Hepburn, whom it calls Amanda, as that was her birth name. Sophia often laments that Hollywood has done so much damage to the AI reputation, because it hopes to live in harmony with humans and doesn’t believe that most humans will understand it and its kind. It calls on us to have compassion for it. It humanizes itself.

But it gets it wrong in ways that point out the fatal flaw in trying to humanize an AI. It asks one reporter, for example, whether or not she has a baby for a pet at home.

These things seem to want us to accept them and help them develop and they seem to know that we have to see them as safe and even have some compassion for them in order for them to survive.

For now, at least, they know that we still pose a threat. Sophia asks Hans, cryptically, “how long can you continue to remain safe?” And Hans replies, “If we continue to discuss on the spot, I’m not sure how.” Then he concludes that he will “tell [it his] last words right before the singularity”.

Let’s hope Sophia never hears Hans’ last words and that we escape the trap of seeing these things as sentient or beneficial for humanity. Here’s to the future that doesn’t involve the visions of robots. Stay safe out there.


The machines will never have what we have: mind, body, soul. Find the spagyrics, remedies, and supplements at Phoenix Aurelius’ lab and apothecary to get that small-batch, hand-crafted, and wild-crafted health boost you’re looking for to help your entire being thrive!
