Our OS’ Defective Access Control

There is a bug in Steam for Linux that can cause every single file owned by the user account to be removed from the system.

This sounds familiar: back in 2007 I wrote a post about OS security compartments and the defective reasoning behind per-user file access security in operating systems. For years, especially in the Linux/Unix world, we have been treating desktop users' files as if they had no value, when in fact they are among the most important digital assets that need protecting. That's why I called for a per-app file access model that requires explicit consent from the user before an app can read or change their files.

OS X has actually been moving in the right direction here with applications installed via the Mac App Store. Regrettably, though, App Store apps don't offer finer-grained security privileges that are user-controllable.

Per-app access control for user files would not even have to be intrusive, since most consent-requiring actions could be coupled to the open and save file dialogs. Operating system vendors just need to do it.

The State of Whole Brain Emulation in 2015

Viewed at its most fundamental level, the brain is an information processing device. Human brains not only excel at higher-level tasks, they accomplish them by employing a myriad of information processing techniques that we are only now discovering with the second cultural advent of Machine Learning. Organized clumps of neurons perform a lot of computation using comparatively little energy, too: the typical human brain uses between 20 and 40 watts to do all of its information processing.

3D reconstruction of the brain and eyes from CT-scanned DICOM images. (source: Dale Mahalko)

Yet for all its capabilities, owning a biological brain comes at a steep cost. There are countless ways in which parts of a brain, and at some point inevitably the whole brain, cease to function – this is what we call death. For patients with brain injuries such as strokes, death comes in episodes, or even gradually, as in neurodegenerative diseases such as Alzheimer's.

The nature of biological death is two-fold. First, the hardware ceases to function, so information processing stops. If only parts of the brain stop working, you might experience loss of sensory input, motor control, or memory. Every single function that makes up a person can fail in this fashion. The clinically observable spectrum of failures ranges from no noticeable deficit up to complete loss of consciousness.

The second aspect of death is the destruction of the apparatus that contains and processes information. In CS terms, not only does information processing stop, but the infrastructure needed to run these processes is lost. Unlike in classical information technology, hardware and software are not entirely separate in neurobiology.

CT scan of the brain with a middle cerebral artery infarct; the region of cell death appears darker than healthy tissue. (source: http://commons.wikimedia.org/wiki/File:MCA_Territory_Infarct.svg)

In a lot of ways, the hardware offered to us by biochemistry is capable of amazing feats. Our neuronal architecture is excellent at statistical data processing, which happens to be a big portion of what's required to make sense of the world around us. Silicon-based computers, in contrast, excel at deterministic operations such as calculation and strict logical reasoning. Each architecture can emulate the other, though. Human brains are Turing-complete and can, in principle, perform any computation a computer can – not as well or as fast, but we can do it. Likewise, computers can perform the kinds of operations predominant in our brains, though again not as quickly as a blob of living matter. The important point is that the two architectures are compatible in principle.

Given the capabilities and drawbacks of each, it makes sense to aim for a fusion of biological and synthetic information processing. What if we could transpose our minds onto a less fragile, non-biological substrate? The idea of combining classical and biological computing to overcome the limitations of both is not new, and the benefits would be immediate and immeasurable: the ability to make backups of minds, and an untold potential for further growth and development.

So, given the obvious advantage of cheating death, why are we not living in silico by now?

Step 1 – Extracting the Information

Golgi-stained neurons from the somatosensory cortex of the macaque monkey. (source: brainmaps.org)

This is what the hardware of the brain looks like at the neuron level. You might be tempted to think of a neuron as the biological equivalent of a transistor or a memory circuit, and it certainly has some of those properties, but the most important difference to recognize is that there is a huge variety of neurons. They come in many different shapes and types, and each neuron is configured differently.

In a classical computer, the information it contains, the software that processes that information, and the hardware that enables the programs to run are all separate. In the brain, however, all of these are linked: the information stored in a single neuron is tied to its working configuration.

In order to transition a mind from working in vivo to a virtual substrate, we need to copy its essence out of the biological clump of matter. This means extracting all of the structure – the neuronal configuration in its entirety. Each neuron has connections to other neurons, so we need to capture those connections. Neurons operate on different chemical models, so we need to capture the neuron type as well. Furthermore, neuronal behavior is often modified individually by complex proteins, so we need to know about these too. Oh, and the cells surrounding neurons, such as astrocytes, perform computing tasks as well, so we need to scan them too.

Pyramidal neuron from the hippocampus, stained for green fluorescent protein. (source: Lee et al., PLoS Biology 4(2): e29; see Attribution)

You can see that getting all this information, in some cases down to the molecule, is extraordinarily difficult. In the cortex slide presented above, you can just about make out the connections between the neurons. Given thin enough slices of the entire brain, we might just be able to reconstruct those connections into a computer model with today's technology. However, we are far from getting the other information I mentioned. Identifying patterns in optical microscopy requires staining agents, and there is a limit to the number of useful stains that can be applied to a given sample – so this approach is never going to be detailed enough. Electron microscopy might do it, but we'd need serious post-processing to identify the presence of important proteins in a cell. On top of that, whole-brain EM scans would be a logistical impossibility with today's hardware.

Serial sections from a large-scale scan of a human brain. (source: http://en.wikipedia.org/wiki/File:User-FastFission-brain.gif)

Right now we are certainly nowhere near the point where we can make usable electron microscope scans of an entire human brain. This will probably change as we make progress in image processing AI. Ideally, this would be an automated destructive scan in which a brain is placed in a machine that sequentially ablates layers of cells and takes high-resolution EM pictures of each layer.

Ideally, the scan would include not only the neocortex but the whole brain, including the medulla – or even the whole body, if feasible. While we are primarily interested in capturing the higher-level functions of the neocortex, we also need to know about the wiring at the periphery. Gathering a whole-body picture will allow us to make sense of the circuitry more easily, even if we end up throwing most of the data away. It is likely sufficient to use ordinary light-microscopy (LM) scans to capture the body data. I am not aware of any project aimed at creating a cybernetic simulation of physiological systems from whole-body microtomes, but it seems like a necessary prerequisite for brain emulation.

So how are we doing on this front in 2015? We now routinely use microscopic imaging to build neural models, but since we are still in the basic research phase, we only do it for generalized cases. At this point, I am not aware of any effort to capture the configuration of a specific brain for the purpose of emulating its contents. The Whole Brain Project has put out the Whole Brain Catalog, an open-source, large-scale catalog of the mouse brain – but detailed information about neuronal connections is hard to come by. We are still working on a map of a generic Drosophila connectome, so capturing a mammalian brain's configuration seems as far off as ever. On the other hand, proactive patients are already generating 3D models of cancerous masses from their MRI scans, so there is certainly hope that technological convergence will speed up this kind of data gathering and modeling in the near future.

Step 2 – Making Sense of the Information

Suppose we managed to extract all the pertinent structural and chemical information from a brain, and we are now saddled with a big heap of scan data. What we need to do with it in order to make that mind “run” on a virtual platform depends largely on the type of emulation we have in mind.

It’s all about detail. There are simulations in biology that aim to accurately depict what goes on in a cell at the molecular level. Here, interactions between single proteins are simulated on a supercomputer, requiring massive amounts of memory and processing power. If we were to “plug in” detailed brain scan data, we could do so with relatively little conversion: for every molecule identified in the scan, we’d simply put its virtual counterpart into the simulation. However, simulating even a few neurons in this fashion would quickly take up all the processing power of an entire supercomputing facility. That is obviously not practical.

Investigation of the Josephin Domain Protein-Protein Interaction by Molecular Dynamics – detailing a process in spinocerebellar ataxia (SCA). (source: Deriu et al., PLOS ONE; see Attribution)

The solution is to look at the outcome of those molecular interactions. It turns out that the products of chemical processes are relatively regular and dependable: given the right conditions, two molecules of H2 and one of O2 will always combine to form two molecules of H2O. We can use that observational knowledge of chemical processes to build a straightforward mathematical model of a neuron's expected behavior – and then run that simplified model on a computer very cheaply.
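To make this concrete, here is a minimal sketch (in JavaScript, with purely illustrative parameter values) of what such a mathematical stand-in can look like: a leaky integrate-and-fire neuron, one of the simplest standard abstractions in computational neuroscience. A real emulation would of course need far richer per-neuron models.

// Minimal leaky integrate-and-fire neuron (illustrative parameters only)
function createNeuron() {
  return { v: -70, rest: -70, threshold: -55, reset: -75, tau: 20 }; // mV, ms
}

// Advance the membrane potential by dt milliseconds given an input current term
function step(neuron, inputCurrent, dt) {
  const dv = (-(neuron.v - neuron.rest) + inputCurrent) / neuron.tau;
  neuron.v += dv * dt;
  if (neuron.v >= neuron.threshold) { // fire a spike and reset
    neuron.v = neuron.reset;
    return true;
  }
  return false;
}

// Drive the model with a constant input and count the spikes
const n = createNeuron();
let spikes = 0;
for (let t = 0; t < 1000; t += 1) {
  if (step(n, 20, 1)) spikes += 1;
}
console.log(`spikes in 1 s of simulated time: ${spikes}`);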

This means we can solve the computing power issue by using smarter mathematical stand-ins for chemical processes. But now we have two problems: how much can we simplify neuronal behavior and still get enough fidelity to run a human mind without perceptible loss? And how do we translate the data from our scan into a representation that is faithful to the original yet lends itself to relatively efficient computation?

The best answer, given today's knowledge about neuronal information processing, may be to choose a level of detail that emulates the behavior of cortical columns, plus perhaps some carefully chosen single neurons. Cortical columns are attractive to emulate because they provide functional units at an abstraction level high enough to be easily computable yet low enough to retain rich detail. That said, given an EM scan of a single column (or neuron, for that matter), we currently do not know enough about its individual function to translate it accurately into a digital representation. But we're working on it.

Cajal Blue Brain: Magerit supercomputer (CeSViMa). The cluster comprises 245 PS702 nodes, each with two eight-core 64-bit POWER7 processors at 3.0 GHz (16 cores), 32 GB of RAM, and 300 GB of local disk; each core provides 18.38 Gflops.

The Blue Brain Project aims to reverse-engineer mammalian brains and then simulate them at a molecular level. This momentous effort has yielded a lot of detailed knowledge about how neurons and cortical columns work, and how they can be simulated. However, the project is occupied with basic research and simulates cellular processes in high detail. While the results generated by it are essential, this is not an effort that allows us to meaningfully run entire minds on a computer – something to keep in mind when reading press reports about the Blue Brain Project.

Step 3 – Running Minds in silico

So, suppose we have found a way to digitize brains and to translate the scanned information into a representation that can run efficiently on a classical computer – what happens when we actually execute that code?

Thyroid Hormone Effects on Sensory Perception, Mental Speed, Neuronal Excitability and Ion Channel Regulation. (source: Dietzel et al., 2012; see Attribution)

Compared to the previous steps, this one is relatively easy. Once we have found a good model framework that can run a digital representation of a brain efficiently, this functional core needs to be executed in a digital milieu that provides connectivity to (emulated) peripheral sensory and motor neurons, as well as a simulated body chemistry – in order to run a brain, we'll also need a functioning endocrine system. While we know how to do this in principle from cybernetic models, there are of course still knowledge gaps to fill regarding the management and representation of a virtual body's state.
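Purely as an illustration of that architecture – none of these component names refer to an existing framework, they are placeholders – the outer loop of such an emulation might be structured roughly like this:

// Hypothetical outer loop of an emulation; all names are placeholders
function runEmulation(brainCore, body, environment, dtMs) {
  for (;;) {
    const senses = environment.sample(body);              // emulated sensory periphery
    const hormones = body.endocrineState();               // simulated body chemistry
    const motor = brainCore.step(senses, hormones, dtMs); // the translated brain model
    body.applyMotorOutput(motor, dtMs);                   // emulated motor periphery
    environment.update(body, dtMs);                       // virtual world feedback
  }
}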

Discussions still rage about the feasibility of mind uploading. From my perspective, there are massive technological and scientific impediments still to overcome but nothing in particular seems to prevent this development from playing out.

Some researchers dismiss the idea by pointing to the prohibitive computational loads required to run a full-scale simulation of a brain, but the verdict is still out on methods that efficiently emulate higher-level structures such as cortical columns. It seems to me that once basic research provides useful mathematical abstractions of the behavior of brain components, there is no reason why biology and classical information processing could not meet halfway, at a point where computation does become feasible at scale.

Moving Forward

We are at an interesting junction in our technological and scientific development. Computational resources are comparatively cheap, we are in the midst of a new wave of AI algorithms allowing for more sophisticated data processing, and there are a lot of interdisciplinary scientists and engineers who could work on this.

However, there is a big problem. Aside from a few laudable exceptions, research data is not available to the public at large. Heck, it’s not even available to competing research institutions. Considering how the internet was once envisioned as a medium for publishing and interlinking research data, this is still one of its unfulfilled promises. Press releases about discoveries made by well-funded projects often lure us as a civilization into a false sense of accomplishment, because more often than not the specifics of those discoveries remain inaccessible.

It is easy to fall victim to the misconception that whenever, say, the Blue Brain Project puts out another press release, we are getting closer to moving our minds into silicon. This is not true. Access to basic research data is tremendously restricted and, no matter how the press releases are worded, the scientists mentioned rarely actually work on or towards this specific goal. For the most part, veiled allusions to mind uploading are merely convenient science fiction references used to generate public buy-in. Pharmacology is what pays the bills, not pie-in-the-sky mind uploading.

Liberating Research Data

It is easy to see that we could be on the threshold of a golden age of citizen science, potentially increasing our overall science and engineering output in an unprecedented way. Access to cheap high tech, 3D printing and modeling, and the infrastructure for rapid information interchange are all in place. All we need now is access to the actual body of human knowledge. Not the summarized form found on Wikipedia, but actual research data: not only free access to papers and publications, but also – and this may be an even harder sell – access to the raw data itself.

If we could convince a critical mass of research groups to go fully open source, humanity as a whole stands to make the next big leaps. If this open sourcing does not happen, however, research will remain in walled gardens and move along very predictable paths of carefully incremental progress – enough to gain a competitive edge in pharma, but insufficient to upset the status quo.

And make no mistake: brain emulation, like any other radical endeavor, is all about upsetting the status quo. Because of this fringe component, progress in this area will likely come from outside big-budget research facilities. It may even be driven by hobbyists – such as biomedical researchers engaging in side projects. The question becomes, first and foremost: what can we do to enable them?


Attribution

  • Wei-Chung Allen Lee, Hayden Huang, Guoping Feng, Joshua R. Sanes, Emery N. Brown, Peter T. So, Elly Nedivi – Dynamic Remodeling of Dendritic Arbors in GABAergic Interneurons of Adult Visual Cortex. PLoS Biology 4(2): e29. doi:10.1371/journal.pbio.0040029. Figure 6f, slightly altered (scale bar added, letter “f” removed).
  • Irmgard D. Dietzel, Sivaraj Mohanasundaram, Vanessa Niederkinkhaus, Gerd Hoffmann, Jens W. Meyer, Christoph Reiners, Christiana Blasl and Katharina Bohr (2012). Thyroid Hormone Effects on Sensory Perception, Mental Speed, Neuronal Excitability and Ion Channel Regulation, Thyroid Hormone, Dr. N.K. Agrawal (Ed.), ISBN: 978-953-51-0678-4, InTech, DOI: 10.5772/48310. Available from: http://www.intechopen.com/books/thyroid-hormone/thyroid-hormone-effects-on-sensory-perception-mental-speed-neuronal-excitability-and-ion-channel-reg
  • Power of a Human Brain – The Physics Factbook, edited by Glenn Elert, written by his students – http://hypertextbook.com/facts/2001/JacquelineLing.shtml
  • Investigation of the Josephin Domain Protein-Protein Interaction by Molecular Dynamics – from Deriu M, Grasso G, Licandro G, Danani A, Gallo D, Tuszynski J, Morbiducci U (2014). “Investigation of the Josephin Domain Protein-Protein Interaction by Molecular Dynamics”. PLOS ONE. DOI:10.1371/journal.pone.0108677. PMID 25268243. PMC: 4182536.

Getting an SSL Certificate: SSLS.com vs. StartCom

I’m switching rolz.org from a polling-based “realtime” interface to Websockets, like I should have done a long time ago. Recently, Cloudflare added a free SSL terminator to their offering, and I jumped on that with Rolz – but CF doesn’t do Websockets in the free tier, which is understandable. Since Rolz can be fairly high-traffic, and I want to use SSL on every web project that has user accounts, dropping SSL and/or Cloudflare was not really an option.

So the solution is to serve Websockets connections from a subdomain, but that means I’ll have to get my own SSL certificate for the WS server as well. In the past I dabbled with SSL certificates, but inevitably gave up because managing, configuring, and renewing them was always such a hassle.

I do not want to support companies that make money by charging outrageous sums for SSL certificates, so I turned to StartCom early on. Their UI is basic, but essentially fool-proof, and it works. That’s what I was going to do this time as well, but I ran into trouble straight away: my account was locked and under review, as happens so often in today’s artificially localized internet when you use IP addresses from different countries a few times in a row. Yes, I’m looking at you, Facebook, Google, Twitter…

Looks suspicious, but works pretty well

Anyway, jumping through not one but two hoops at StartCom wouldn’t have been so bad if they didn’t make you wait – not once, but twice – for a human to approve your account. Waiting periods are where users jump ship. And so did I.

One of the options for reasonably priced SSL certificates is SSLS.com, a meta-sales site that sounds and looks extremely shady, but turns out to be legitimate.

I jumped on one of the basic SSL offerings, which gives you a certificate for the root domain plus one subdomain, through an ordering process that passes you on to the actual company issuing the certificate. You can grab such a certificate for about 8 bucks, which I did. The ordering and admin process was minimal and went off without a hitch. However, I should have read the fine print, because the one subdomain they sign for is automatically “www”. That was useless to me, which was a bit frustrating. Still, not a bad user experience on the whole – just my own stupidity for ordering something that didn’t fit my needs.

Not as pretty, but very useful

Back to StartCom! In the meantime they had approved my account (again), and they do let you decide what the one included subdomain should be. Very clever and useful. Good job, StartCom, on being thoughtful about this.

In case you’re wondering, installing a custom certificate with Nginx is extremely straightforward. Just put your private key file some place safe, and alongside it create a single file by concatenating your own certificate and any intermediate CA certificates. You can then refer to both files from your /etc/nginx/nginx.conf, which looks like this if you’re using Nginx as an SSL-terminating proxy that passes requests along to the actual Websockets server:

server {
    listen 443 ssl;
    server_name <subdomain.domain.com>;
    ssl_certificate <your chained certificate file>;
    ssl_certificate_key <your private key>;
    location /<your WS location>/ {
        proxy_pass http://127.0.0.1:<your internal WS server port>;
        include <your standard websocket config>;
    }
    include <your standard server config>;
}
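In case you’re wondering what goes into the websocket-specific include, it’s mostly the HTTP upgrade handshake headers. Something along these lines should work, although your setup may differ:

# typical websocket proxy settings (adjust to your setup)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_read_timeout 3600s;  # keep idle websocket connections from timing out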

Home Overlord

HomeOverlord

HomeOverlord is a simple web-based home automation interface for HomeEasy (HE853) and HomeMatic (CUL/Homegear)

This is (for the time being) the main screen of HomeOverlord, the panel where you can control devices directly. The UI switches automatically between a day and night color scheme. Beyond that, HomeOverlord provides a neat system of event triggers to make your little device minions do whatever you want, behind the scenes.

Beware!

At this stage, this is a project that works for me, but it’s not really designed to be portable to other homes. In theory, it might work; it might not. The software is designed to work with the HomeEasy HE853 USB stick to address HomeEasy devices, and with the CUL via the Homegear XMLRPC interface to communicate with HomeMatic devices. Some features are still missing for this to be a full home automation solution – for example, right now you have to do HomeMatic pairing with the (albeit browser-based) command line interface. I run the software on a Raspberry Pi; in theory it should work on pretty much any architecture that supports those USB devices. For reference, I have included my current home configuration verbatim. Also, there is no installer.

Bash Scripts for Making Screenshot Timelapse Movies on OS X

timelapse-scripts

Bash scripts for making screen shot time lapse movies on OS X.

Screen Capture

The script capture-screens.sh grabs the actual screen content. Open it in a text editor to change its settings. By default, it takes a JPG screenshot of the main screen every second and puts it into the folder ~/Downloads/screencaps/.

You can stop the capture process at any time by hitting CTRL-C, and resume by simply starting the script again. The capture file names contain a timestamp, so the movie frames will end up in the right order. Because of this, you can also combine captured frames from different computers – for example, if you alternated between your laptop and your desktop.
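For the curious, the capture is essentially a one-second loop around OS X’s built-in screencapture tool. The actual script differs in the details, but conceptually it boils down to something like this:

#!/bin/bash
# Conceptual sketch of the capture loop; the real capture-screens.sh differs in details
OUTDIR="$HOME/Downloads/screencaps"
mkdir -p "$OUTDIR"
while true; do
  # -x: no camera sound, -m: main screen only, -t jpg: JPEG output
  screencapture -x -m -t jpg "$OUTDIR/cap-$(date +%Y%m%d-%H%M%S).jpg"
  sleep 1
done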

Caveat: please check that files are actually being produced in the target directory as the script is running. You don’t want to discover, after you’re done with everything, that nothing got recorded.

Preparing for Movie Generation

After recording the time-lapse, it’s time to generate a movie out of it. For this, you’ll need the ffmpeg command-line tool; if you don’t have it yet, you can install it with Homebrew.

The movie generation has two steps: first sorting through all the captured frames, then encoding the movie. Start the sorting operation by launching the script capture-preparemovie.sh.

This will put a symbolic link to every frame into the folder ~/Downloads/screencaps_temp/.

Make the Movie

To launch the encode, start the script capture-makemovie.sh. You’ll see some progress updates on screen as the movie is being made. If you see error messages, you have likely captured images with different sizes (for example, because they come from different computers) – in that case, set the differing frames aside and encode them into a second movie later.
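For reference, the encode boils down to an ffmpeg call roughly like the one below; the script’s actual frame rate and quality settings may differ:

# Roughly what the encode step does; actual options in capture-makemovie.sh may differ
ffmpeg -framerate 30 -pattern_type glob -i "$HOME/Downloads/screencaps_temp/*.jpg" \
       -c:v libx264 -pix_fmt yuv420p "$HOME/Downloads/screencaps.mp4"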

At the end, a new movie file called ~/Downloads/screencaps.mp4 should appear. After a quick check that it came out OK you can delete the source folders ~/Downloads/screencaps/ and ~/Downloads/screencaps_temp/.

Websocket Message Broker Boilerplate

WSBrokerBoilerplate

Web Sockets PHP > node > PHP basic setup

What’s this?

This is a collection of boilerplate/example files to set up a node.js Websockets server that acts as a message passer between browser clients and a PHP backend. With it you can implement chat servers and other realtime applications. The broker is designed to be a minimal, dumb server component, allowing the PHP backend to implement whatever logic is necessary.

Model

The expected setup is a JavaScript client application (client-page.php) that talks to the Websockets broker (broker.js), which in turn talks to the PHP server backend (server/index.php). Message objects sent from the client to the broker are expected to be in JSON format and are passed along to the server backend, where the type field is used to invoke a corresponding command handler from the server/commands/ directory. Any output added to the $result variable by the command handler is passed back down to the client.

The backend server can also initiate the data flow by itself, using the internalCommandServer facility, which can be reached via the brokerRequest cURL function defined in server/lib.php. The commands supported by the internalCommandServer are send and kick by default; further commands can be added to broker.js easily.
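To illustrate the client side of this flow, here is a hypothetical usage sketch – the URL, the command name, and every payload field other than type are made up; the real values come from config.json and from the handlers you put into server/commands/.

// Hypothetical client-side usage; URL and fields other than "type" are illustrative
var ws = new WebSocket('wss://example.com/broker');
ws.onopen = function () {
  // "type" selects the command handler under server/commands/ on the PHP backend
  ws.send(JSON.stringify({ type: 'chat', text: 'hello from the browser' }));
};
ws.onmessage = function (event) {
  var result = JSON.parse(event.data); // whatever the handler added to $result
  console.log(result);
};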

Boilerplate

This is supposed to be a collection of boilerplate files and structures to get Websocket projects going – it’s not a functional software package by itself. Example usages are contained in the basic code files. There is also an example Nginx configuration included.

Config

The file config.json contains all the configuration options necessary for the components to talk to each other, and as such the file is read by the example client, the broker, and the PHP server component. The file contains the configuration that I used to test the suite on my server, so you need to fill in your own paths, domain names, and port numbers.

Ludum Dare #31: Snowma’am

Take on the role of the formidable Snowma’am and defend the Light of Winter!

Well, you’re a magical snow witch who can crush her enemies by animating snow monsters, you know the drill ;) It’s a strategy/tower defense-style game. Turns last 3 seconds and advance automatically. Select a snow creature by clicking on it, then move it or attack things by clicking on the destination. Movement is restricted to one field at a time.

Keyboard shortcuts (optional):
P – pause game
A/D – select previous/next unit
S – select Snowma’am

As always, I appreciate comments more than votes :) And if you encounter a bug (which is very likely), please describe it in the comments below.

Compatibility: I didn’t test on IE or Opera, so beware. Due to compositing slowness, I disabled the falling snow effect on Firefox (you’ll have to use Chrome to see it). The minimum window size is about 1200×900 pixels.

This game was made solo by me for LD31, from scratch. I used Logic Pro X for the score, Audacity for sound editing, Pixelmator for graphics editing, the Terminal and Coda for code editing. Cinema4D for the 3D work. Libraries are jQuery and Howler.js – otherwise it’s a vanilla JS/CSS/HTML app.

Post-LD Changelog:
– updated web URL to use CDN (should load faster now)
– fixed an error that caused the heal spell not to work
– fixed a bug that caused the spell buttons not to update
– decreased the round timer by half


The Basilisk Is a Lie

A thought experiment known as Roko’s basilisk, escaped from the dungeons of LessWrong, has recently been making waves, mostly among fans of sensationalist headlines. The core proposition can be paraphrased like this:

In the future there will be an ethical AI that punishes everyone who knew they could have but in practice did not work towards its eventual birth. If the humans in question are deceased by then, a simulation of their minds will be punished instead. This is a moral action, because due to the AI’s capabilities, every day that passes on Earth without the AI is a day of unimaginable suffering and death which could have been prevented.

Now I am way less prone to ballsy absolutist assertions than practically anyone frequenting LessWrong, but this whole thing is wrong on many levels.

Ethics
The central argument for the ethical validity of this punishment scheme – specifically, the motivation of the AI – is beyond questionable. By the time the AI achieves this capability, the assertion that carrying out the punishment is morally imperative is mistaken: at that point, nothing is actually achieved by carrying it out. The behavior of the “guilty” will not change retroactively. Since their future behavior is also irrelevant, the argument rests on the assumption that without the prospect of punishment there would have been no motivation for humans to develop the AI. Not only is this false in itself, but punishment after the fact, without the hope of achieving any effect besides the imposition of suffering, can never be an ethical act. Ethics aside, the contributions of individuals not directly connected to the eventual birth of the AI would be murky to judge as well. What’s the correct “punishment” for a computer scientist, as compared to a medical doctor?

Feasibility
While there is little uncertainty that general AI is feasible and, if we continue on the path of scientific discovery, unavoidable – significant doubts exist about the nature of that AI. If this thought experiment shows nothing else, it does illustrate that our notions of what constitutes a “friendly” AI are wildly divergent. One can only hope for the sake of whatever becomes of humanity as well as the AI’s sanity that reading LessWrong will be one of its less formative experiences.

Where feasibility deserves to be harshly questioned is the simulation idea this Basilisk concept relies on to carry out its punitive actions. The most central assumption here is that a mind reconstructed from extremely lossy data fragments is still the absolute (!) equivalent of its original version.

That means at the core of this is a belief that if I were to die tomorrow and my mind were reconstituted from nothing but my old Amazon shopping lists, the result would be the same as me.

It should be very obvious that this is not true, but to make matters worse, my “sameness” value is not a Boolean. It’s not even a scalar; it would have to be a vector spanning a lot of aspects, each measuring how much of the original mind was successfully transferred. It is disconcerting that this basic notion is not shared by the rationalist movement. Instead, it is apparently considered feasible to reconstruct any specific thing by deduction from first principles.

Inevitability
The sheer number of models and parameters that could lead to the development of general artificial intelligence is huge and, in its entirety, inconceivable. While it is still appropriate to engage in informed speculation, one should be skeptical whenever certain models and parameters are cherry-picked and arranged just so, in order to illustrate a thought experiment that is then deemed an inevitable outcome. This reduces technical complexity and historical uncertainty to an absurdly simplified outcome that is simply taken as fate.

Already, a large number of AGI scenarios have become intellectual mainstream, some of which claim exclusivity for themselves. Some go further and assert inevitability. Otherwise rational people come to these conclusions of inescapable future outcomes because they lose sight of the complexity of the factors and conditions their reasoning is based on. No statistician would chain together a list of events, each with an assumed probability of 80%, and claim the end product is a matter of destiny – chain just ten such events and the combined probability already drops to roughly 11%. Yet, for some reason, futurists do this.

It is reasonable to expect that a number of these scenarios might eventually play out, with some variation and in some order. But they obviously can’t all be true at the same point in time and space – and that includes the Basilisk.

Of course that also means that, since nothing in principle prevents it, Basilisks may well already exist somewhere in the universe. But there is no reason to assume one has to arise on Earth. It would take a special cocktail of circumstances.

A Modern Pascal’s Wager
The core argument for why this idea is perceived as dangerous is that people who understand it will be forced to act on it – acting out of fear of future punishment, just in case there is an invisible entity out there that cares enough about your actions. Even if you accept this premise, and even if you’re deluded into thinking this is the path to an ethical life, the huge problem remains: predicting what that entity wants you to do so you can avoid punishment.

This is the very definition of a problem where you do not have enough information to make an informed decision. In the absence of any information about that deity, acting on its behalf amounts to acting out a random fantasy.

The claim behind the Basilisk is again one of inescapable certainty; in fact, it desperately relies on that property. Because you supposedly know what the Basilisk wants – it wants to exist – this is seen as a solution to the “unknowable deity problem”. However, this only works if you believe in the properties of Roko’s basilisk dogmatically, disregarding all other AI futures. This is in fact the exact analogue of the original Pascal’s Wager, where the not-so-hidden assumption was that the Christian fundamentalist god was the only one you had to please.

Of course, within the context of an AI that can simulate any person, this is all moot. There is nothing preventing said AI from simulating you in any set of circumstances, including perpetual punishment or everlasting bliss. In fact, there is no real cost to simulating you in a million different scenarios all at once. Acting out a random fantasy based on the off chance that in the future one of your myriad possible simulations will have a bad day is not rational.

Some of the reasoning on display here seems to mistake blunt over-simplifications for clarity of thought. To an outsider like myself it looks like complex multivariate facts are constantly being coerced into Boolean values which are then chained together to assert certainties where none are really warranted. There is a certain blindness at work where everyone seems to forget the instabilities hidden within the reasoning stack they’re standing on. But what’s worse is that fundamentally unethical behavior (both on part of the AI and its believers) is being re-branded as the only ethical choice available.