the future of programming
or the future of Human-Computer Interaction in general
Jacob Chapman | Oct 5, 2019
I think the way software is written will fundamentally change over the next 20-40 years.
Even if it will be a long time before Neuralink is able to “write” directly into brain memory, I think it won’t be very long before “brain reading” devices are commercially available—and just having a one-way high-bandwidth path into the brain will change the way we interact with computers in a fundamental way.
In the future, software programmers might say to the computer, "Create new program" (obviously we will still talk to computers; otherwise, what was the point of 2012-2022?). Then the Neuralink software-creation program will read your brain, sensing what kind of program you are thinking of making. It senses you are thinking about the sales spreadsheet you were just working on, because that synapse activation pattern is the closest match. You think about how you want to share the spreadsheet, so the software-creation program offers to put the data into Google Sheets and share it with your employees. Oh, okay. Yeah, that's what I meant. Turns out I didn't need to write custom software to manage the sales data; I could just use Google Sheets.
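To make the "closest activation pattern" idea concrete, here is a minimal sketch in plain Python. Everything in it is hypothetical: the embeddings, the context names, and the idea that brain signals could be reduced to small vectors at all. It just shows the matching step as nearest-neighbor search by cosine similarity.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical stored "activation patterns" for recent work contexts,
# as the software-creation program might remember them.
contexts = {
    "sales spreadsheet": [0.9, 0.1, 0.3],
    "photo editor":      [0.1, 0.8, 0.2],
    "email client":      [0.2, 0.3, 0.9],
}

def closest_context(brain_signal, contexts):
    """Return the stored context most similar to the current signal."""
    return max(contexts, key=lambda name: cosine(brain_signal, contexts[name]))

print(closest_context([0.85, 0.15, 0.25], contexts))  # -> sales spreadsheet
```

The real problem would be vastly harder, but the shape of it, compare an incoming pattern against remembered ones and pick the best match, is this simple at its core.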
But what if the functionality you wanted was more novel?
In that case, the Neuralink software-creation program would go step by step with you, capturing your thoughts about how you wanted the program to behave. It would display on screen some of the possible edge cases it wants your input on. But the programming doesn't really stop there. Let's say you create a program, it looks good, and you deploy it. It's now running on AWS (Amazon Web Services) cloud servers.
But what if the program crashes, or it detects what could be an edge case? Maybe the probabilities for all of the behavior options in a particular scenario are low, so there is no clear path for the program to take. In that case, the Neuralink master control program would pick your brain a little. Maybe you're in the shower (or asleep), so it's a convenient time for the master control program to ask which behavior option you'd like the program to take. It does this in a way similar to how we ask quantum computers questions: it sets up some synapses to pose the question, reads out your preference, and rewrites that part of the program to align more fully with how you want it to behave.
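The escalation logic described above, pick the best option if the program is confident enough, otherwise defer to a human, can be sketched in a few lines. The function names, the threshold, and the `ask_user` callback are all made up for illustration; the point is only the confidence-gated handoff.

```python
def choose_action(option_probs, threshold=0.6, ask_user=None):
    """Pick the highest-probability behavior option.

    If no option clears the confidence threshold (no clear path),
    escalate to a human via the ask_user callback and report that
    an escalation happened.
    """
    best = max(option_probs, key=option_probs.get)
    if option_probs[best] >= threshold:
        return best, False  # confident: act autonomously
    # No clear path: defer to the user (in the post's scenario,
    # the Neuralink master control program would pose the question).
    return ask_user(option_probs), True

# All options are low-probability, so the user is consulted.
action, escalated = choose_action(
    {"retry": 0.30, "skip": 0.35, "abort": 0.20},
    ask_user=lambda opts: "skip",
)
print(action, escalated)  # -> skip True
```

Today the `ask_user` step would be a notification or a support ticket; the post's speculation is only about how low-friction that channel could become.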
Then maybe it sends you an email with the detailed scenario, the actions it took, and your conscious or subconscious response, so you can confirm whether that was the correct behavior.