on the artiface of intelligence…

Had one of those early-morning, overly vivid dreams: Two pairs of perfectly human-looking robots (they were ‘robots’ in the dream, not ‘androids’, so let’s use that word here). And these were Manichaean robots, apparently: One pair felt ‘good’ – or at least I was supposed to feel more sympathetic towards them – and the other pair were ‘bad’. No particular reason for the distinction was given; it was Just So.

The action took place in an old house that was also, somehow, a laboratory environment. Right down to a few old, white, grey-haired men in white lab coats wandering around the periphery, clipboards and all. Something was ‘out in the car’, and the two robot teams were ordered to compete, Hunger Games-style, to retrieve the item (again, never specified – though I vaguely remember thinking it was a book or a scroll, locked in a car out in the parking lot).

The end was inevitable: The ‘good’ robots dutifully started running towards the open door. The ‘bad’ robots just grabbed them from behind, hoisted them overhead, and smashed the ‘good’ robots’ all-too-human faces to bits against the top of the door frame.

That said: Neural networks, like mini-skirts, sideburns and bell-bottoms, come in and out of fashion every twenty years or so. The cycle is thus:

  1. It’s been a period of algorithmic stagnation, but damn… the hardware’s gotten a lot faster. What shall we do with it?
  2. Hey! Neural Networks! (Though maybe we should call it something else…)
  3. Great early successes. Even a few new applications. Wonder why these work so well on these problems?
  4. Clever people study the why, discover new algorithms, perhaps even new classes of algorithms…
  5. …which end up proving the old quip that a neural network is “the second-best solution to any problem” – once you know the first-best.
  6. Hey, look: The hardware folks have been busy… We can run these new algorithms we’ve found much faster… let’s optimize!
  7. Possible optimizations eventually all get found. The field begins to stagnate…

Rinse, lather, repeat. But at least some interesting papers get generated.

Elon Musk went from warning us against AI in an open letter to becoming a co-founder of the OpenAI initiative in less than half a year. Obviously, his thoughts are his own, but I suspect he’s decided that the Rise of AI is inevitable, and that rather than warn against it or try to prevent it, he can at least help ensure that the forefront of the research takes place out in public (as it used to in bygone ages), and not in the shadowy halls of some Deep Private research lab (i.e. companies: some known, some unknown, but None To Be Named).

No one would ever accuse me of optimism, but I don’t worry much about rampant AIs taking over the planet through Terminator-style military force, or converting Earth’s surface into a puddle of grey goo. The thermodynamics of it just don’t work out: Our meat-brains are highly energy-efficient compared to our current (and near-future) computers. Yes, a computer can now beat you at Go. But it consumed several orders of magnitude more energy than you did to do so. And yes: it can also recognize your grandmother’s face, but it burned tens of watts across a few billion instructions in the process. Your brain did it in fewer than 100 steps – neurons fire at a few hundred hertz at most, so the half-second it took leaves room for only about a hundred sequential steps – consuming a few thousandths of a watt all the while (and doing innumerable other things at the same time, I might add).
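To make the thermodynamic point concrete, here’s a minimal back-of-envelope sketch in Python. Every constant in it is an assumption supplied for illustration – the accelerator count, per-move power draw, thinking time, and the brain’s task fraction are plausible orders of magnitude, not measured figures – and the conclusion survives even if each one is off by a factor of ten:

```python
import math

# Back-of-envelope energy comparison. ALL figures below are assumptions
# chosen for illustration -- rough orders of magnitude, not measurements.

GPU_POWER_W    = 300.0   # assumed draw of one accelerator card, in watts
MOVE_SECONDS   = 5.0     # assumed thinking time per Go move
N_ACCELERATORS = 100     # assumed size of the inference cluster

BRAIN_POWER_W = 20.0     # the whole human brain runs on roughly 20 W
TASK_FRACTION = 0.001    # assumed share of that budget for one recognition
TASK_SECONDS  = 0.5      # the half-second it takes to know grandma's face

silicon_J = GPU_POWER_W * MOVE_SECONDS * N_ACCELERATORS   # joules per move
meat_J    = BRAIN_POWER_W * TASK_FRACTION * TASK_SECONDS  # joules per face

print(f"silicon: {silicon_J:>12,.1f} J per Go move")
print(f"meat:    {meat_J:>12.3f} J per recognition")
print(f"gap:     roughly 10^{round(math.log10(silicon_J / meat_J))}")
```

With these guesses the gap comes out around seven orders of magnitude; shrink the cluster or stretch the brain’s share a hundredfold and silicon still loses badly.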

So: All dramatic competition aside, Thermodynamics Always Wins. Always. (Though that of course doesn’t preclude nasty things happening on the way to a local optimum.) So IMO we are ‘safe’ from that scenario, for now.

People should be more worried about the potential abuse of AI by Those Who Know Better. For now, the powers that be are satisfied with using AI models to anticipate our desires for monetization. But social media companies are already using AI to ‘shepherd’ mass opinion towards a certain flavor of bland, corporate centrism (sometimes at the behest of governments, sometimes on their own). How soon before each one of us has a virtual ‘minder’ that watches our every move and builds a model of our psychological internals so accurate that it can not only anticipate our responses but control them – until eventually the man-in-the-loop becomes… unnecessary? Or at least atrophies into nothing more than the slimy rock or rotted tree stump upon which something else lives and grows.

Once you’ve been successfully modeled, you can be successfully replaced. Think of it that way.

So no, giant killer robots aren’t going to autonomously slaughter us en masse (unless of course they were ordered to by other humans – in which case the problem was us, not them). I worry more about (what’s left of) the Human melting away into a dank little whimper of a world that eventually dies of boredom from watching itself, like some mad, self-feeding, world-sucking Silicon Ouroboros.

Here’s the tl;dr version: Your brain is not a fucking computer… Please stop acting like it is.

2 thoughts on “on the artiface of intelligence…”

  1. Hammer

    Moreover, only the Western (or Northern) system could melt away in boredom. The rest of the world is too busy getting on with it (like, say, Africa). We taint them, sure, but they are forging something new, and it is anything but boring.
