
Chappie is just the latest entry in a long tradition of stories about human-like technology.

Long the domain of science fiction, perfect software models of human and animal brains are now an active research goal. With an approach known as Whole Brain Emulation (WBE), the idea is that if we can perfectly copy the functional structure of a brain, we will create software that is functionally equivalent to it. The implication is simple yet mind-boggling. Scientists hope to create software that can, in theory, experience everything we experience: emotion, addiction, ambition, consciousness, and suffering.
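What does “simulating a brain” actually look like at the smallest scale? The basic building block is a model of a single neuron stepped forward in time. Here is a minimal sketch of one standard textbook model, a leaky integrate-and-fire neuron; the parameter values are illustrative defaults and do not come from any of the research described here.

```python
# A minimal sketch of neuron-level simulation: a single leaky
# integrate-and-fire neuron, a standard textbook model. All parameter
# values are illustrative; none of this comes from Sandberg's work.
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Step a leaky integrate-and-fire neuron; return voltages and spike times (ms)."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_ext in enumerate(input_current):
        # Membrane voltage decays toward rest while being driven by input.
        v += dt * (-(v - v_rest) + i_ext) / tau
        if v >= v_threshold:              # threshold crossing: the neuron "fires"
            spike_times.append(step * dt)
            v = v_reset                   # voltage resets after a spike
        voltages.append(v)
    return np.array(voltages), spike_times

# Drive the neuron with a constant input for 100 ms (1,000 steps of 0.1 ms).
_, spikes = simulate_lif(np.full(1000, 20.0))
print(f"{len(spikes)} spikes in 100 ms")
```

A whole brain emulation would, in effect, scale something like this up by many orders of magnitude, using far more biologically realistic neuron models wired together according to a connectivity map scanned from a real brain.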

“Right now in computer science, we’re doing computer simulations of neural networks to figure out how the brain works,” Anders Sandberg, a computational neuroscientist and research fellow at the Future of Humanity Institute at the University of Oxford, told Ars. “It seems possible that in a few decades we will take whole brains, scan them, turn them into computer code and make simulations of everything that happens in our brains.”

Everything. Of course, a perfect copy does not necessarily mean equivalent. Software is so… different. It’s a tool that performs because we tell it to perform. It’s hard to imagine that we could imbue it with the same abilities that we think make us human. To imagine our computers loving, starving, and suffering probably feels a bit ridiculous. And some scientists agree.

But there are others – scientists, futurists, Google’s director of engineering – who are seriously working to make this happen.

For now, let’s put aside any questions of if and when. Imagine that our understanding of the brain has expanded so far, and our technology has become so capable, that this is our new reality: we humans have created conscious software. The question then is how to deal with it.

And while success in this endeavor to turn fantasy into fact is by no means guaranteed, there has been quite a bit of debate among those who ponder these things as to whether WBEs will mean immortality for humans or the end of us. There has been far less discussion about exactly how we should respond to this kind of artificial intelligence, should it appear. Shall we show a WBE human kindness or human cruelty – and does it really matter?

The ethics of pinching an AI

In a recent article in the Journal of Experimental and Theoretical Artificial Intelligence, Sandberg delves into some of the ethical questions that would (or at least should) arise from successful whole brain emulation. The focus of his paper, he explained, is “What are we allowed to do with this simulated brain?” If we make a WBE that perfectly models a brain, can it suffer? Should we care?

Again, regardless of if and when, it is likely that an early successful software brain will mirror that of an animal. Animal brains are simply much smaller, less complex, and more available. So would a computer program that perfectly models an animal deserve the same ethical consideration as a real animal? In practice, this may not be a problem at first. If the software animal’s brain mimics that of a worm or an insect, there is little need to worry about the legal and moral status of the software. After all, even the strictest laboratory standards today place few restrictions on what researchers do with invertebrates. When dealing with the ethics of how to treat an AI, the real question is what happens when we emulate a mammal.

“If you imagine I’m in a lab, I can reach into a cage and squeeze the tail of a little lab rat. The rat will squeak, it’ll run away in pain, and it won’t be a big deal for the unlucky rat. And actually, the regulations for animal research take a very strict view of that kind of behavior,” says Sandberg. “But what if I ran a little rat simulation and pinched its tail? Is that as bad as doing it to a real rat?”

As Sandberg alluded to, there are codes of ethics for the treatment of mammals, and animals are protected by laws designed to minimize suffering. Would digital laboratory animals be protected under the same rules? Well, according to Sandberg, one of the goals of developing this software is to avoid the many ethical issues associated with using biological animals in the first place.

To address these issues, Sandberg’s article takes the reader on a tour of how philosophers define animal ethics and our relationship with animals as sentient beings. These are not easy ideas to summarize. “Philosophers have been bickering over these issues for decades,” says Sandberg. “I think they’ll keep bickering until we upload a philosopher into a computer and ask him how he feels.”

While many people might reply, “Oh, it’s just software,” this seems far too simplistic to Sandberg. “We have no experience of not being flesh and blood, so the fact that we have no experience of software suffering might just mean we haven’t had a chance to experience it yet. Perhaps there is such a thing as software suffering, or something even worse that software can experience,” he says.

Ultimately, Sandberg argues that prevention is better than cure. He concludes that a cautious approach would be best, namely that WBEs “should be treated as the corresponding animal system in the absence of evidence to the contrary.” When asked what such evidence might look like (that is, how one could show that software modeling an animal brain lacks the animal’s consciousness), he offered an example. “A simple case would be if the internal electrical activity didn’t look anything like what’s in a real animal. That would suggest that the simulation is not close at all. If there is a counterpart to an epileptic seizure, then we could also conclude that there is probably no consciousness, but now we are getting closer to something that could be worrisome,” he said.

So the evidence that a software animal’s brain is not conscious looks… exactly like the evidence that a biological animal’s brain is not conscious.
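To make Sandberg’s criterion slightly more concrete: one plausible (and entirely hypothetical) way to operationalize “the activity doesn’t look like a real animal’s” would be to compare spike-timing statistics from the emulation against reference recordings. The data, threshold, and test below are invented for illustration.

```python
# A hypothetical sketch of how "does the activity look like a real
# animal's?" might be tested: compare inter-spike-interval distributions
# from the emulation against reference recordings. Data are invented.
import numpy as np
from scipy import stats

def activity_looks_brainlike(emulated_spikes, recorded_spikes, alpha=0.05):
    """Crude check: are the two trains' inter-spike intervals similar?"""
    isi_emulated = np.diff(np.sort(emulated_spikes))
    isi_recorded = np.diff(np.sort(recorded_spikes))
    # Two-sample Kolmogorov-Smirnov test; a small p-value means the
    # interval distributions are measurably different.
    _, p_value = stats.ks_2samp(isi_emulated, isi_recorded)
    return p_value >= alpha

rng = np.random.default_rng(0)
recorded = np.cumsum(rng.exponential(0.05, 500))  # reference recording (toy data)
emulated = np.cumsum(rng.exponential(0.20, 500))  # much slower, mismatched emulation
print("Plausibly brain-like:", activity_looks_brainlike(emulated, recorded))
```

Passing such a statistical check would show only gross similarity, not consciousness; failing it badly is what, on Sandberg’s account, could license relief.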

Virtual pain

Despite his pleas for caution, Sandberg isn’t advocating abandoning emulation experiments entirely. He thinks that, with a little forethought, compassion for digital laboratory animals can be built in relatively easily. After all, if we know enough to create a digital brain capable of suffering, then we should also know enough to bypass its pain centers. “It could be possible to use virtual painkillers that are much better than real painkillers,” he says. “You’re literally leaving out the signals that correspond to pain. And while I’m not worried about the simulations running right now, I think that will change in a few years.”
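The “virtual painkiller” idea is easy to caricature in code. In a heavily simplified, hypothetical emulation where signal channels carry labels, nociceptive traffic could simply be dropped before it reaches anything downstream; the channel names and structure here are invented for illustration.

```python
# A heavily simplified, hypothetical sketch of a "virtual painkiller":
# if an emulation's signal channels were labeled, pain traffic could be
# dropped before reaching downstream regions. Channel names are invented.
NOCICEPTIVE_CHANNELS = {"c_fiber", "a_delta"}  # hypothetical pain-signal labels

def route_signals(signals, analgesia=True):
    """Forward (channel, value) pairs, omitting pain channels when analgesia is on."""
    for channel, value in signals:
        if analgesia and channel in NOCICEPTIVE_CHANNELS:
            continue  # literally "leaving out the signals that correspond to pain"
        yield channel, value

afferents = [("touch", 0.8), ("c_fiber", 0.9), ("proprioception", 0.4)]
print(list(route_signals(afferents)))  # the pain signal never arrives
```

Real brains are not this tidy, naturally; pain is not a single, cleanly labeled wire.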

This, of course, assumes that the only source of suffering in animals is pain. Then again, it may seem pointless to worry about whether a software animal might suffer in the future, given how much suffering we accept in biological animals today. If you find a rat in your home, you are free to dispose of it as you see fit. We kill animals for food and fashion. Why worry about a software rat?

One answer – basic compassion aside – is that we will need the practice. If we can successfully emulate the brains of other mammals, then emulating a human brain is inevitable. And the ethics of hosting a human consciousness in software get much more complicated.

In addition to pain and suffering, Sandberg considers a long list of possible ethical issues in this scenario: a blindingly monotonous environment, corrupted or disabled emulations, prolonged hibernation, the tricky subject of copies, communication between beings that think at vastly different speeds (software brains could easily run a million times faster than ours), privacy, and proprietary and intellectual property issues.
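That speed mismatch alone is staggering. Taking the millionfold figure at face value, a little arithmetic shows why communication across the divide would be so strange:

```python
# Back-of-the-envelope arithmetic on the speed mismatch. The millionfold
# figure comes from the article; everything else is simple unit conversion.
SPEEDUP = 1_000_000                    # emulation runs 10^6 times real-time
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.6 million seconds

# Wall-clock time for the emulation to live through one subjective year:
print(f"One subjective year ~= {SECONDS_PER_YEAR / SPEEDUP:.1f} real seconds")

# Subjective time experienced by the emulation during one real day:
subjective_years = 24 * 3600 * SPEEDUP / SECONDS_PER_YEAR
print(f"One real day ~= {subjective_years:,.0f} subjective years")
```

At that ratio, a one-minute reply from a biological correspondent would feel to the emulation like a roughly two-year wait for a letter.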

These could all be thorny issues, Sandberg predicts, but if we can solve them, human brain emulations could make remarkable feats possible. Emulations would be ideally suited for extreme tasks such as space exploration, where we might be able to beam them across the cosmos. And when it comes down to it, digital versions of ourselves may be the only survivors of a biological extinction.
