AI and Unemployment; AI and Machine Learning; Stars and Black Holes. It is my pleasure once again to publish (below) Paul Bassett’s exciting, unedited contributions to these areas. Feedback is welcome.

[Photo and bio: Paul Bassett]

Terry L. Hill, PhD. Retired Professor, Administrator, Social Scientist, Blog Host

____________________________________________________________________________________________

A. Will AI (artificial intelligence) replace human workers? How far in the future do you see that happening? What would the unemployed do?

“About 150 years ago, the industrial revolution began to replace muscle power. Almost 75 years ago, the digital revolution began to replace brain power. Whereas early digital computers automated number-crunching tasks such as bookkeeping and payroll, the last 10 years have seen the rise of machines that have learned to do many things, such as face recognition, better than we can. Ever more work that used to require human judgement and creativity is now being done by AI.

A crisis is building. An ever-larger fraction of jobs is low-level and part-time. There is continuing downward pressure on wages. More and more wealth is concentrating in an ever-smaller fraction of the population.

We are approaching an era when most policy-making and wealth-generation can be done better by AI. As Yuval Harari predicts in his latest best seller, Homo Deus: A Brief History of Tomorrow, the skills needed to compete with AI will not be attainable by the vast majority, causing them to devolve into “the useless class”. This has existential consequences for governments and societies.

Having done a lot of thinking about these issues, I am pessimistic.

If we survive at all, my hope is that our current efforts to create AI prosthetics for the disabled will morph into brain-body boosters for normal people. Each of us could become unique in our sensory inputs; we could evolve our DNA to have the bodies we want, to live as long as we want, and to be very smart in unique ways. Our social groupings and interactions should be motivated by pleasure and creativity that mutually contribute to the overall good.

If you think that sounds utopian, think again. Such a future is too hard for most of us to wrap our heads around – way too scary to embrace. Even worse: intelligence in large doses is toxic! Here’s why:

As the pace of innovation speeds up, technologies that affect the whole world generate increasingly complex side-effects and feedback loops. Chaos Theory tells us that many of them will be beyond any intelligence (natural or artificial) to anticipate or avoid. Malicious behaviour abounds: it only takes one psychopathic zillionaire to decide it would be fun to create a Gene Drive that unleashes global bio-warfare, not to mention chemical and nuclear wars. Even without malicious intent, sooner or later a “perfect storm” of mutually exacerbating, globally destructive effects will do us in. The number of ways this can happen is growing exponentially. For example, one that is now underway and essentially unstoppable is Rising Acidity in the Ocean: as it has done in the distant past, it could easily produce mass extinctions and starvation. Societal collapse, pandemics, and the like will surely follow.

It’s telling that in over 60 years of trying, the search for extraterrestrial intelligence in a universe chock-a-block with planets has come up with nothing, zip, nada. Sorry to sound so pessimistic, but it’s supremely ironic that intelligence has enabled us to dominate every living thing on this planet – except each other. Ah, there’s the rub. I beseech you, dear reader, to think hard about these inconvenient truths and figure out how to avoid the consequences.”

Host’s Comment: My sense is that this dire ‘deterministic’ scenario need not happen. Although there is the common example of a butterfly flapping its wings determining a hurricane’s occurrence at a later time, chaotic systems can be deterministic yet unpredictable: however accurately we measure the state of a system at one moment, a variation smaller than anything we can detect may be responsible for a difference in the eventual outcome. And as many zillionaires as there are, few if any would undertake the self-destruction described above, given the interfering socio-political ‘checks and balances’ on their behaviour. The power of public opinion is also increasing exponentially, due primarily to almost universal media access.
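To make that deterministic-yet-unpredictable point concrete, here is a minimal sketch (an editorial illustration, not part of the original exchange) using the classic logistic map: two starting values that differ by less than any plausible measurement error diverge completely within a few dozen steps, even though the update rule is perfectly deterministic.

```python
# A deterministic system that defies prediction: the logistic map
# x -> r*x*(1-x) with r=4. Two starting points differing by one part
# in a trillion soon bear no resemblance to each other.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000000000, 0.400000000001  # differ by ~1e-12
for step in range(60):
    a, b = logistic(a), logistic(b)
    if (step + 1) % 10 == 0:
        print(f"step {step + 1:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.2e}")
# The gap roughly doubles every step, so by about step 40 the two
# trajectories have completely diverged.
```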

B. When will software developers be replaced by AI?

“This question does not have an all-or-nothing answer. Algorithms have long existed that can do the grunt programming that programmers used to have to do by hand. For example, when computers were first invented, programmers had to work in binary code – strings of 1s and 0s. Binary programming is extremely tedious and error-prone. So some smart people came up with programs that translate one language into another – for example, Fortran or COBOL into binary code. Such a program is called a compiler (see Compiler – Wikipedia), and it can save vast amounts of programming effort.
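As a toy illustration of the idea (an editorial sketch, far simpler than any real compiler), here is a few-line “compiler” that translates a tiny arithmetic language into instructions for an imaginary stack machine – instructions a programmer would otherwise have to write by hand:

```python
# Toy sketch of what a compiler does: translate a higher-level
# expression into lower-level, machine-like instructions.
import ast

def compile_expr(source):
    """Translate e.g. "2 + 3 * 4" into stack-machine instructions."""
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}
    def emit(node):
        if isinstance(node, ast.BinOp):
            return emit(node.left) + emit(node.right) + [ops[type(node.op)]]
        if isinstance(node, ast.Constant):
            return [f"PUSH {node.value}"]
        raise ValueError("unsupported construct")
    return emit(ast.parse(source, mode="eval").body)

print(compile_expr("2 + 3 * 4"))
# ['PUSH 2', 'PUSH 3', 'PUSH 4', 'MUL', 'ADD']
```

A real compiler performs the same kind of translation, just with a vastly richer source language and actual machine code as its target.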

Other algorithms routinely generate code and adapt human-written code to novel situations, and in doing so they eliminate the need for many programming jobs. See, for example, Frame technology (software engineering) – Wikipedia. Such programmer-saving systems do not involve machine learning. But AI will eventually learn to code solutions to ill-defined problems, which give rise to the hardest programming jobs. Among the last to be eliminated will be the programmers who give rise to AI itself.

When that will happen is anyone’s guess, but it is not a matter of ‘if’. When it does happen, people will continue to program for their own interest and pleasure.”

Host’s comment: How will the ‘final’ interface between programmers and AI systems change? What if it is in my individual interest and pleasure to create ‘rogue’ AI systems from mainstream ones?

 

C. Can you explain how the underlying technology of AI and Machine Learning algorithms work?

“Machine learning encompasses a variety of algorithmic techniques, of which deep learning is one. While the details can get complicated, the concept of how deep learning works is not hard: it approximates the way your brain learns.

Each neuron in your brain sends electrical signals (typically to thousands of other neurons) along wire-like fibers. But a fiber does not connect directly to its target neuron; rather, it ends at a tiny gap called a synapse. The incoming signal enables molecules called neurotransmitters to cross the gap and stimulate the outgoing fiber to relay the signal to the target neuron. The strength of the relayed signal depends on the number of neurotransmitter molecules. The target neuron combines the strengths of the signals from all its incoming synapses, and if the total strength is high enough, the target neuron fires signals to all its target neurons; and so the signal propagates throughout the huge network of your brain.

What does any of that have to do with learning? The secret is in the synapses. If a pair of neurons participates in a chain-reaction flow of signals, the synapse between them gets stronger – more neurotransmitter molecules cross the gap, increasing the strength of the relayed signal. Canadian psychologist Donald Hebb captured this idea in 1949 with the phrase “Neurons that fire together, wire together.” (Conversely, when neurons fail to fire together, the synapses between them get weaker.) You unwittingly train a network of neurons in your brain to encode new information by increasing and decreasing various synaptic strengths. Your brain follows Hebb’s principle automatically.
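Here is a minimal sketch of Hebb’s rule in code (an editorial illustration with made-up learning and decay rates): the weight between two units grows when both fire together, and drifts down when they do not.

```python
# Minimal sketch of Hebb's rule: "neurons that fire together, wire together."
# pre and post are 1 when a neuron fires, 0 when it does not.

def hebbian_update(weight, pre, post, rate=0.1, decay=0.02):
    if pre and post:                  # fired together: strengthen the synapse
        return weight + rate
    return max(0.0, weight - decay)   # otherwise: let it weaken slightly

w = 0.5
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1), (1, 1)]:
    w = hebbian_update(w, pre, post)
    print(f"pre={pre} post={post} -> weight {w:.2f}")
# The weight climbs on coincident firing and decays otherwise.
```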

Deep learning works in an analogous fashion. Each computer neuron is a bit of logic that multiplies its inputs – the on/off signals arriving from all the neurons that can send it signals – by weights that simulate the synaptic strengths between pairs of biological neurons. If the weighted sum of a neuron’s inputs crosses a threshold value, it sends a signal to all its downstream neurons, just as biological neurons do. The neurons are organized in layers such that every neuron in one layer connects to every neuron in the next layer. (Other connection schemes are possible and are being tried.) The bottom layer takes inputs from the outside, such as the bits that make up a photo. Typically, the top layer is one neuron; it fires or not, depending on how all the other neurons fed their weighted signals through their layers up to the top. When there are two or more layers between top and bottom, the network is called “deep”. In practice, there can be hundreds, even thousands, of layers.
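Here is a minimal sketch of that forward pass (an editorial illustration with made-up weights and thresholds, not production code): each neuron fires only if the weighted sum of its inputs crosses its threshold, and layers of such neurons chain together.

```python
# Minimal sketch of the forward pass described above.

def neuron(inputs, weights, threshold):
    # weighted sum of inputs; fire (output 1) if it crosses the threshold
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def layer(inputs, weight_rows, thresholds):
    # every neuron in this layer sees every signal from the layer below
    return [neuron(inputs, w, t) for w, t in zip(weight_rows, thresholds)]

x = [1, 0, 1]                                   # bottom layer: raw input bits
hidden = layer(x, [[0.5, -0.2, 0.8], [0.3, 0.9, -0.4]], [0.6, 0.5])
top = neuron(hidden, [0.7, 0.7], 0.5)           # single top neuron: fire or not
print(hidden, top)                              # [1, 0] 1
```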

Such a network can be trained to do and recognize all manner of things. It can, for instance, learn to recognize an unlimited variety of cats by repeatedly seeing a fixed set of training instances – say, photos of cats and non-cats. When the network guesses wrong, the machine weakens all the synapses (i.e., weights) that contributed to its wrong guess. Similarly, when it guesses right, the machine strengthens all its participating synapses. This synapse-adjusting rule is simple and fixed. Yet it is amazing what deep learning networks can learn this way.”
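To make that fixed rule concrete, here is a minimal sketch (an editorial simplification – real deep-learning systems use gradient descent and backpropagation, but the spirit is the same): weights that fed a wrong guess are nudged down, and weights that fed a right guess are nudged up.

```python
# Minimal sketch of the fixed update rule described above.

def update_weights(weights, inputs, guessed_right, rate=0.05):
    new = []
    for w, x in zip(weights, inputs):
        if x:  # this synapse participated in the guess
            w = w + rate if guessed_right else w - rate
        new.append(w)
    return new

weights = [0.4, 0.4, 0.4]
weights = update_weights(weights, [1, 0, 1], guessed_right=False)
print(weights)  # [0.35, 0.4, 0.35] -- the two active synapses were weakened
```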

Host’s comment: A visualization would have helped me, but I feel I have the gist. Good description!

D. Stars and Black Holes

“When stars come to the end of their lives, they can die in different ways, depending on their heaviness. Stars like the Sun become dense cinders called “white dwarfs”. A typical white dwarf would be 30,000 kilometers across, and a cubic centimeter of it would weigh 1,000 kilograms.

Stars that are 10-20 times heavier than the Sun end their lives in stupendous explosions, called supernovas, that can briefly outshine their parent galaxies. What’s left is called a neutron star – it’s made of almost nothing but neutrons – and is about 10 kilometers across. A cubic centimeter of it would weigh 500 billion kilograms, about the mass of half a cubic kilometer of water.
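That water comparison checks out (a quick back-of-the-envelope, added here for the curious):

```latex
% One cubic kilometer of water, at 1 g/cm^3:
1\,\mathrm{km}^3 = (10^{5}\,\mathrm{cm})^3 = 10^{15}\,\mathrm{cm}^3
  \;\Rightarrow\; 10^{15}\,\mathrm{g} = 10^{12}\,\mathrm{kg}
% so half a cubic kilometer of water weighs:
\tfrac{1}{2}\times 10^{12}\,\mathrm{kg} = 5\times 10^{11}\,\mathrm{kg}
  = 500\text{ billion kg}
```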

Stars more than 20 times heavier than the Sun also end in supernova explosions, but now the remnant is so dense that it forms a ball of space where gravity is so strong that nothing can escape – even light shining from inside can’t get out. Hence the name “black hole”. This ball could be the size of a neutron star or larger, depending on the size of the original star. Scientists believe that the matter that formed the black hole has collapsed to a point at the centre, and that the rest of the ball is empty of matter.
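For a sense of scale, the standard Schwarzschild-radius formula (an editorial addition, not part of Paul’s answer) relates the size of that ball to the mass of the remnant:

```latex
% Schwarzschild radius: the radius of the ball from which nothing escapes
r_s = \frac{2GM}{c^2} \approx 3\,\mathrm{km}\times\frac{M}{M_{\odot}}
% e.g. a 3-solar-mass remnant gives r_s ~ 9 km, a ball roughly
% neutron-star sized; heavier remnants scale up proportionally.
```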

The bad news is that neither general relativity nor quantum mechanics can describe what is really at the center. The good news is that there appears to be no upper limit to how big black holes can get. The largest one found so far is 21 billion times heavier than the Sun, and became that way by having billions of stars fall into it. I say it’s good because we have no idea what might happen to the universe if a black hole got too big!”

Host’s comments: So a black hole is an extremely dense collection of neutrons? How does gravity attract neutrons?

 
