Tag Archives: AI

Book Review: All Systems Red: The Murderbot Diaries by Martha Wells

3 May

 

A SecUnit assigned to the exploratory group PreservationAux has a problem. Two, in fact. As an android, it’s supposed to serve the small exploratory mission to which it has been assigned. The SecUnit’s entire function is to support the mission’s survey of a planetary environment the group has placed a bid to assess. Androids like SecUnit are a safety precaution from the Company because, well, alien planets can be rather hostile. And of course, they are handy recording devices, too, for the Company, that is. A mandatory helper and a spy for anyone looking to explore the wild frontier in space.

Given that planets are not monolithic single-biome worlds, having multiple teams from competing groups spread out across a newly found world is routine. Who knows what another team will find over in the next valley, or a bit down the river? One team can’t survey everything on a planet. So when a team neighboring SecUnit’s goes dark, that’s a bad sign and a major concern. What disaster befell them? Environmental? Natural? Something else? Given the proximity, is it a threat to PreservationAux, and to SecUnit itself?

The other problem is more personal. The SecUnit has managed to hack its own governor module, making itself independent, autonomous, and capable of disobeying orders. It’s not going to reveal this, of course, for fear of termination or worse. But this SecUnit is new to the idea of being able to make decisions for itself. New to the concept of doing what it wants to do. New to trying to come to terms with its own identity.

Like, for example, a designation for itself. A name. Inside, secretly, the self-hacked SecUnit calls itself … Murderbot.

The Intersection: AI and Creator-bias

19 Apr

Today’s post isn’t about science fiction exactly, but we’ll file it under “thoughts that inspire science fiction” and vice versa.

Ask a professional scientist if observer bias exists, and they’ll say yes. Medical science alone has many examples of what happens when bias is ignored, and it affects medical practice in dangerous ways. Until recently, drug testing was almost never conducted on women. The reasoning was that women have “hormone fluctuations,” and the male-dominated medical industry wanted a pure data baseline. Society treats male as the default for human, so the establishment assumed that whatever is safe for men is safe for women and never looked back. The failure in logic here: if a drug’s effects vary enough with female hormone fluctuations to muddy the data, how did they miss that the same interaction could change the drug’s efficacy in the patient? Or to put it another way: how could they possibly know whether the drugs were safe for women if the drugs were never tested under shifting hormones, the very conditions in which they would actually be used? This isn’t the only example.[1] And medicine isn’t the only science to suffer because of unexamined bias.

And here is where we begin our discussion of AI.

Nature Magazine: No Humans Allowed (Plus a Question For Listeners)

10 Sep

Have you heard? NPG, the folks behind the scientific journal Nature, have banned Homo sapiens from submitting to their magazine:

To the dismay of many (yet to the delight of a few), Nature Publishing Group announced today that its flagship journal, Nature, will no longer accept submissions from humans (Homo sapiens). The new policy, which has been under editorial consideration for many years, was sparked by a growing sentiment in the scientific community that the heuristics and biases inherent in human decision-making preclude them from conducting reliable science. In an ironic twist of fate, the species has impeached itself by thorough research on its own shortcomings.

The ban takes effect on 12 September and will apply to those who self-identify as human. Authors will be required to include, in addition to the usual declaration of competing financial interests, the names of all humans consulted in preparation of the submitted work. Other journals are likely to adopt a similar policy.

Of course, the above is all a bit of humor, but can you blame them? When you read the whole thing, it starts to make a lot of sense. Why are humans doing all the science? We’re faulty fleshbags, after all!

But the real question is this:

Will we ever see a future in which machines/robots/half-humans/non-humans do all of the science for us?

I suspect yes, but probably not in my lifetime. Non-humans have played a major role in science for a long time, but humans have always been needed to parse out the details. We have to do the interpretation. Still, our reign will be short-lived. Eventually someone will invent an AI, robot, or not-quite-human that can do roughly the same work, only better. That will be an interesting day, no?
