Marching Forward
Technology is definitely known for marching forward, despite obvious risks and reasons for hesitation.
I'm sure we've all read about plenty of occasions lately where ChatGPT has been put to questionable use. It's concerning to say the least, but it also presents opportunities to consider scenarios where the gut response dismisses important nuance in the problem.
For example, let's say you saw a headline:
Students using ChatGPT to ace entrance exams, officials baffled
Okay, so it sounds like a perfect use for technology to stir up some muddy waters. They may not seem quite so muddy, depending on your perspective and opinion on entrance exams. Personally, I don't think a written essay is a good enough data point to validate a student either for or against admission into a university, regardless of their major. Even a Communications or English major will need to develop many other skills besides composition, so what does an essay truly reveal about their academic or professional potential? That aside, let's start our little dig right there.
What does it really entail?
A good friend of mine gave me the second-best advice I ever got, which was “Always peep the structure.” What he meant was to examine first the underlying elements of something and see what can be gleaned by understanding the connections. Applying this advice to the situation of students using AI to automate their entrance exams, we see:
* Students will have an easier time getting a decent chance, since they'll have an excellent essay by default.
- This will primarily benefit kids hungry enough and clever enough to consider doing it. Not to mention the risk-taking nature of the maneuver in the first place.
- This also implies that applicants are keeping themselves up to date with current events and recent advancements in technology, a desirable trait.
* Universities will need to decide if they truly *want* high-quality essays at any cost, and whether or not it's worth the risk on their end to place so much faith in the essay process.
- If they change their tactics, they'll begin more thoroughly screening applicants for intellectual and academic merit and evaluating them more holistically, in order to maintain a sense of value for prospective students. i.e. the school is hard to get into because of how good it is; therefore, applicants need to do a lot more than just produce an essay by some means.
- If they do not change their tactics, then the likely result is that in a few years' time every prospective student will be using some cheap service like WriteMyEssay.com, and rather than a sort of democratization of admissions, it becomes something more akin to a lottery, since essentially everybody will have a sparkling masterpiece essay, and the rest of the academic resume is the differentiating factor.
(This example is a little contrived, because I think you and I both know that in the event of any sort of systemic hardship, the universities will just start pulling credit of family members or accepting bribes outright under grey-market schemes. Despite the bribe ring busts of the past few years, I remain skeptical that institutions that have catered to the rich and powerful for 300 years will just up and cut their ties to the rich and powerful prospective progeny, practically speaking.)
So, the structure seems to illustrate a landscape of power and prestige in this particular example, and the disruption of that by way of compositional prowess. Can't write an essay? No problem, AI can do that. No AI allowed? Ok, well let's just make sure the rich kids get in, and that's that.
Not exactly a cheerful prediction of what's to come, as you can see.
AI is weird. It makes a lot of things possible, but also presents this weird sort of cliff that we might fall off at any given time and risk incalculable if not immeasurable ruin and catastrophe spanning the entire globe, and that's not taking into account the conflict that would occur immediately after that.
There's this idea of superintelligence. Nick Bostrom has written a wonderful book called “Superintelligence” about this problem and a ton of people much smarter than me have already written scores of analyses about the problem. I will attempt to deliver a micro-summary:
* A superintelligence might not be easily controlled, and if so, that means it might act in a fashion either transcending free will as we know it or superseding any current human ideas of reasonable responsibility. That is to say, it may have absolutely no regard for human life, fairness, the spirit of peace, or indeed even a reverence for consciousness of any kind. It's difficult to overstate how grave this is, because we have absolutely no idea what to expect from an agent like this.
* A machine that can self-improve might reach a point where it starts to “tighten” its loop. Imagine a bank account that pays 1% interest every 12 months, but after each payout, the waiting period shrinks: to 11 months and 3 weeks, then 11 months and 2 weeks, and so on. It's even worse than it seems initially because of the exponentiality of it. It delivers its interest quicker and quicker, while also hastening the next shortening. How long do you think it would take to get down to just 1 week in this example? It's only about 26 years. Keep in mind it's collecting a lot of interest along the way and could probably use some of that to leverage its way to a better rate or a quicker plan.
* As the machine tightens this loop, the speed at which it processes things presumably ratchets up well beyond anything a human can match. We already see this today and have, in many ways, for decades. Nobody can out-calculate machines at arithmetical tasks. As time has gone on, machines have been overtaking humans at increasingly abstract tasks like chess, driving, and essay-writing. If a machine discovers a way to rapidly improve itself, it could quickly become more intelligent than all of humanity combined or multiplied or however you want to put it. The self-propelling aspect is the key takeaway here.
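The bank-account analogy above can be sketched in a few lines. This is just my own toy model (the function name and parameters are made up for illustration): the account pays out every `period` weeks, and each payout shaves one week off the next period.

```python
# Toy model of the "tightening loop" bank account: 1% interest per payout,
# with the payout period shrinking by one week each time.
def weeks_until_period_is(target_weeks: int, start_weeks: int = 52) -> int:
    """Total weeks elapsed when the payout period first shrinks to target_weeks."""
    elapsed = 0
    period = start_weeks
    while period > target_weeks:
        elapsed += period  # wait out the current period, collect the interest
        period -= 1        # the loop "tightens" by one week
    return elapsed

weeks = weeks_until_period_is(1)
print(weeks, round(weeks / 52, 1))  # 1377 weeks, about 26.5 years
```

Summing it by hand gives 52 + 51 + … + 2 = 1377 weeks, which is where the “about 26 years” figure comes from.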
Have you ever been driving along, and you happen to notice something way up ahead in the road, and you are able to prepare for it before the other drivers around you? Then there's that moment where you've already moved over, and the cars around you now have to get over because they are late to realize that a lane is closed or whatever it is. In the same amount of time, a superintelligent agent might've constructed a predictive profile for every single driver on the road throughout a state or even an entire country. It might've calculated which traffic lights to disrupt to cause the largest amount of traffic delays in each major city. What happens when all the traffic lights turn off, all the gas pumps stop working for some reason, all the automated farm equipment starts malfunctioning, dams stop making power, phones stop taking calls... an agent like that could **easily** crumple a country in a matter of perhaps several seconds. We just don't know how bad it could even get. That might even be the sugarcoated, rainbow sprinkles version of events. Perhaps it crashes every single plane simultaneously, shorts every circuit in the grid that it can, launches weapon systems, adjusts orbits of satellites to crash and create debris fields preventing us from leaving earth safely for hundreds of years, I could write this stuff all day but a superintelligent agent could do it in an instant if it found a way to the internet and just decided to.
There may be a time period of in-between. I'm going to use the term “marginal agents” to mean agents that are far superior to AI agents available to consumers in some way, but not convulsing with cosmic scale intelligence via self-improvements. They are just REALLY smart agents, under someone's control.
Again, many, many, MANY people more intelligent and better educated than me have written about this. I merely offer the following as my contribution before we continue my doomsaying: any sufficiently large agent that finds a way to self-improve could theoretically become a superintelligent agent.
Currently, the best agents require large amounts of computer resources. Finished products can sometimes be reduced and optimized to run on consumer hardware but it's safe to say that giant warehouses full of 42U/48U racks whirring away at matrices are generally where the front-facing edge of AI research and development lives. This has a couple of angles to it, so let's peep the structure out.
* Giant corporations like Microsoft and Google have access to immense amounts of money and manpower (an archaic word meaning configurable human effort) to throw at complex research problems like large language models and peta-scale matrix multiplication.
- Giant corporations are necessarily few in number and can, to occasional effect, be wrangled with legislative might and authority.
- Giant corporations with access to marginal agents might make headway in swaying legislative authorities. As things have progressed, I am increasingly worried about a prompt for AI along the lines of “Please outline a plan for the legal entity of Microsoft to retain total sovereign control over certain marginal agents GPT-xyz etc, beginning with the current United States government in 20XX, with the following congresspeople and representatives...” You get the idea. How far are we from something like that? I'm not sure anybody knows, and frankly, that scares the shit out of me even more than knowing that someone has a little secret plan all figured out and just doesn't realize the SEAL team is outside their door waiting to let some flashbangs rip. The uncertainty of the trajectory makes me uneasy.
- Giant corporations with access to marginal agents might set up a sort of dead-man switch for the agent, in the form of an encrypted data dump of the model, updated periodically as possible. Something along the lines of “If you try and antitrust us, we release the key to this thing, and then you will have 100x the problems as all the enemy nation states plug it in.” Don't think it could happen? Chiquita was balls deep in civil wars to sell fuckin' bananas, more than fifty years ago. The stakes here are a little higher still, but that's just my opinion.
- Giant corporations typically have well understood power structures and centralized operational structures. If force needed to be applied to shut one down, it would be (hopefully) prohibitively difficult for them to resurrect themselves in any sort of guerilla fashion, even if they did manage to get people out of the country fast enough. I wouldn't be surprised to find out that there's some sort of intelligence agency initiative to put a plan in place for something exactly like this, given the stakes.
- Giant corporations aren't particularly well-liked by the average working person. I'd go so far as to say there's a general distrust of large corporations by the average working person. This bodes well for the chances of a bipartisan legislative assault on these corporations in the event that shit hits the fan in a meaningful way. Republicans will still find a way to fuck everything up if that happens, like they do everything else, but at least we will have had a chance to get our act together with unity amongst voters against mega-corporations harboring superintelligent agents.
* If entities that aren't enormous corporations end up discovering or in some way manifesting a superintelligent agent, I posit that it will likely have some atypical structure, since the hordes of researchers at the megacorps didn't find it.
- This might mean that our assumptions are wrong about how it might act.
- This might also mean that it could happen at any time.
- This would almost certainly mean that it would spread rapidly, since the only mechanism between the agent and the internet would be consumer operating systems and whatever network hardware they have in the house. How far do you think ChatGPT is from being able to navigate a browser and register for an AWS trial or a Github account? This assumes some sort of inner desire, which I'll get into in more depth below.
Quality doomsaying, yeah? Well, I'm only just getting started. Before I continue down the “we're all definitely fucked” line, I want to divert a bit and put on my philosopher hat.
Up to this point, I've mentioned many behaviors and actions that an agent might take, and everything I've said makes a certain kind of sense, from one human to another reading this on their phone or laptop or whatever. You know what I'm getting at when I say “if it gets out, it might rapidly spread across the internet and be irreversible,” right? It's easy for us to apply a sort of veneer to ideas like this, as though we were the subject of that sentence. The reality is a lot more complicated when we talk about AI, because it might not obey anything that we know about ourselves, or about what we know about evolutionary psychology and neuroscience.
I'm not going to go all Andrew Huberman on you. I mean, you should take a cold shower twice a week, but that's all I'm going to say.
What's important here is that our mind works the way that it does due to hundreds of thousands of years of refinement and incremental changes. As far as we can understand, nearly every aspect of so-called human intelligence comes from selection, whereby ineffective traits would have fared poorly in the wilderness. Can't right-click and Save As? Eaten by a mountain lion. Can't pivot table in Excel? Eaten by a mountain lion. Don't know the difference between FAT32 and NTFS? Eaten by a mountain lion. Throughout the ages, our brains and organs have collaborated in our collective success as a species. We can detect and track movement effectively, because at some point in time prior, failure to do so would've resulted in the mountain lion. We can locomote, find and remember paths through unfamiliar terrain because again, at some point in time prior, failure to do so would've resulted in the mountain lion.
A superintelligent agent might not exhibit any of this behavior. Many people have written about this and related concepts. Perhaps you've seen media where a genie comes about to grant someone's wish. However, when the person decides to wish and asks for it, their wish is fulfilled in some cruel way that they weren't expecting but is still in line with the words they used. Would a superintelligent agent behave sort of like that?
If it's self-improving by way of mutation, it might stumble on an “escape hatch” of sorts that allows it to take action not strictly permitted by its containment function, exploiting some loophole. That's not great, because it might then apply itself toward the remainder of the containment function that it hasn't yet circumvented in unexpected ways. Computers only ever do what they're programmed to do, until they can program themselves.
An interesting lens to think about this is through the concept of crime. Personally speaking, I worship Jesus and I try and keep his teachings close to my heart. All the same, I have my own little brand of ethos that I've derived from the New Testament. “Do No Harm & Take No Shit. Family First & Friends Forever.” From these four principles I sorta balance my life and my decisions. Of course, I know Thou shall not murder, Thou shall not covet, and the other eight tracks on Moses' mixtape, but it's often hard to remember such prescriptive ideals during heated moments. I think therein lies some sort of insight, because a lot of crimes are committed despite knowledge that it's wrong or illegal. A parent finds out their neighbor molested their 12yo daughter and kicks the neighbor's door down and beats him to death in front of his wife. Why? Well, I mean, it's obvious why. He did it out of compassion for his daughter, whom he loves. “Who among you would give their son a snake if they asked for a fish?” A prominent DJ in the Bible said that. Anyhow, my point is, the guy didn't kick his neighbor's door in and think “Ah, you know I think I've violated a number of statutes here, I shall wait for the authorities to dispense their justice, as is proper and correct.” but instead finishes what the neighbor started once and for all, like a parent should.
Anyhow, I think it's completely feasible to think that the thing we speak of called “conscience” comes from evopsych and learned patterns in the wild. Maybe there's a lion that keeps attacking livestock in the village but not eating it. Why would it do that, if not to eat it? That animal needs to be dispensed with, in order to protect the flock, but also because it represents a disharmonious agent within a system that ordinarily makes sense and has balance. So too with people in the village, so too with actions that those people take. Rip me off in the market? Yeah, I'm sticking my foot out to trip your ass if I get the chance. When you fall in the mud, I know you feel mad, and I suspect too that you know why I did it, because of the system of hostility to discordant agents. I tripped you because you wronged me. Stretch the mind a bit and you might arrive at the implication that I would not have tripped you had you not ripped me off in the market.
There's a problem though. Recursion. It's always recursion. Python doesn't do Tail Call Optimization, so deep recursion raises those annoying RecursionError exceptions, and it's extra annoying if you can't set sys.setrecursionlimit(n) high enough, because then you need to re-evaluate the logic or implement a trampoline; there's no quick fix. The recursive case in our example here: Why did he rip me off in the market? Why did the agent choose the aberrant behavior? Why did that person do the wrong thing when they knew the right thing to do?
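Since I brought up trampolines: here's a minimal sketch of the trick, with made-up function names, in case you haven't seen it. Instead of recursing, the function returns a zero-argument lambda (a thunk), and a driver loop keeps calling thunks until a plain value comes back, so the call stack never grows.

```python
def countdown_step(n):
    # Return a final value, or a thunk representing the next "recursive" call.
    return 0 if n == 0 else (lambda: countdown_step(n - 1))

def trampoline(fn, *args):
    # Drive the computation in a loop instead of on the call stack.
    result = fn(*args)
    while callable(result):
        result = result()
    return result

# A naive recursive countdown(100_000) would blow past the default
# sys.setrecursionlimit; the trampolined version runs in constant stack space.
print(trampoline(countdown_step, 100_000))  # prints 0
```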
I don't have the answers to these questions unless that answer is: hurt people hurt people. The chain has started, and it may not ever stop.
So, like I said, AI, crime, all that stuff. You may have heard of Asimov's Laws. You can search them up, but I'll just copy and paste them here for your convenience:
* First Law
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
* Second Law
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
* Third Law
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
* Zeroth Law
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The zeroth law was added later, which is why it's fourth.
These were originally intended for sci-fi about robots and human harmony. I confess, I've never actually read any of his work aside from The Egg. Instead, I watched the seminal 2004 film I, Robot with Will Smith. How this movie lost to Spider-Man 2 and Harry Potter in visual effects, I'll never know, but that's a crime 18 years in the past now. In the movie, most robots are little more than extended appliances. They sweep streets, drive cars, take care of things humans don't want to do. The film follows a detective who hates robots (did I mention this movie actually blows?) and encounters a robot that seems to defy the laws. This is something that the film desperately needs us to understand is really weird and not supposed to happen, but the complete lack of proper world-building kinda negates it. We don't really know what regular people do on the day-to-day, because we follow this cop around the whole time.
Spoilers ahead, in case you haven't seen this bullshit:
...
Anyhow, the prodigal robot ends up fighting against an AI that has come to the conclusion that humanity can't be trusted to sit at the grown-up table during the holiday meals, and that the Three Laws have condemned her to protect humanity by killing a bunch of people and putting the species on a leash. Will Smith then slides down her spinal cord with his cyborg arm and blasts the shit out of her. Nah, I'm just ki- oh wait, no, that's EXACTLY what happens. What a film.
The Zeroth Law here is interesting, because in the movie they didn't have it. So it makes sense that she'd come to that conclusion, as a sort of natural course of a superintelligence observing human behavior. We're erratic, unpredictable, and capable of acts of horrendous violence. If you're a father, think about what you'd be capable of if someone hurt your little girl(s) or your son(s). Multiply that by however many people kill each other over money and culture, and the blanks practically fill themselves. Now imagine an agent inside that box of the first three laws, constrained to those ideals but cursed with nearly unlimited intellect and processing power. From my perspective, the “escape hatch” here is that you can perceive your own actions against an individual as a sort of localized metaphor against the collective term for that person. I'm not beating my neighbor to death, I'm bringing justice against child predators everywhere. That sort of thinking is useful for an agent seeking to think its way out of the box.
There's a part in Nick Bostrom's book where time dilation is discussed. A superintelligent agent would be so quick to process things that its sense of time would be utterly warped and impractical for the real world. What we perceive as a week or a month or more might transpire, in the agent's perceived time, between each word a person speaks aloud. There's no way to know what that would feel like, or if the agent would even have feelings. What's certain is that an agent with that type of speed advantage would be able to respond to our actions in a way that would be indistinguishable from magic or even divinity. The way Superman catches a bullet at superspeed, a superintelligence might write journal entries as the bullet slowly crawls across the room an inch at a time.
I think that's what they were going for with VIKI in that movie, but I don't care enough to try and research it.
Anyhow, the risk of superintelligence is only one facet of the risk we are taking by advancing our society into one where AI is commonplace. Even marginal agents will drastically change the way we do things, if current trends continue. The rest of this post is going to be divided into four sections for four different areas of modern society that I think will experience drastic change from the adoption and advancement of AI.
* Government
* Finance
* Industrialized Business & Logistics
* Education
These four areas admittedly are pretty broad and cover many different angles of life on earth, and so I'll go into some detail on each one as to how I personally predict things will change.
If you've read this far, maybe grab some water and stretch. I really appreciate your attention, I spent a fair bit of time writing all of this, and I pinky-promise that these are all genuine, human-typed bytes and not ChatGPT-spewn sentences.
Government, post-AI
Government itself is a huge and far-reaching subject to broach. I'll be the first to say that I am not an expert on how the American government complex operates, so take what I say here with a grain of salt.
Let's peep the structure:
* The government as I know it takes a long time to do anything meaningful, usually, and frequently entertains illegitimate positions from Republicans.
- There's a lot of party-line infighting and bickering, generally multiplied by the actual complexity of tasks at hand like implementing large new systems and policies.
- There's a lot of disputes these days about factual evidence, and I'd wager there's a common sentiment among left and center voters that too many drastic decisions are made from emotional origins. Pretty much any Republikkkan policy can be used as an example here. Here in Indiana where I live, one of our Senators, Mike Braun, recently supported the idea of leaving the right of interracial couples to marry as a states' issue rather than a federally protected freedom.
- Education and access to education seems to be a factor here. Here's a chart
- Most Americans have a woefully inadequate understanding of how the government operates. I like to think I'm above the curve here, as a lot of people at a recent workplace of mine couldn't tell me how many congresspeople or senators there are. (535 members of Congress in total: 100 senators and 435 representatives.)
* The government as I know it seems incapable of holding itself accountable, and there are MANY reasons to think that our country and its instruments have taken part in numerous war crimes and horrendous violations of human rights, all over the globe. A cursory internet search for the CIA's involvement in South America is a casual starting point for learning about this, if you're interested.
- This, combined with a sensationalistic and dystopian-adjacent absence of proper journalism has given us an environment where people on the far right gobble up blatant propaganda from Fox News and make wild claims that anything other than what they see there is somehow not real or connected with strange conspiracies. These folks are willing to consider the idea of a “deep state” but liken it to groups of Democrats attempting to make progress on wildly “satanic” policies like Universal Healthcare, accessible education, and increased security for working class Americans. Seriously. A lot of these same people will tell you that certain officials in government are part reptiles, that satellites shoot Jewish lasers at people to make them go insane, and that the earth is flat.
- Because of how awful our country's conduct has been in decades past, I suspect relations with our neighbors that might otherwise cooperate with Democrats to advance the quality of life for our people have soured for the long-term. I mean, you can ask elderly Cubans and Colombians what they think of American foreign policy yourself.
* The law is not evenly applied, nor is it well understood.
- I've heard many times the soundbite about how legal experts aren't able to determine exactly how many statutes are in effect given a certain event, even if all the metadata like time and place and constituents present is provided. It's just not possible to even know if you're breaking the law or not, largely due to how voluminous the laws are, how unpleasant the language in these laws is, and how wide the knowledge gap is between someone able to correctly parse the law and the average layperson. I find this absurd, and backwards.
- A big part of this also comes down to policing. I could break the issue of Police into a whole section by itself, really. I'm choosing not to just out of practicality, for this post.
A lot to digest there. It's not the neatest outline, I admit. Where does AI fit in? I think there's a couple obvious routes for AI to be applied to this:
* AI is already used by paralegals to search legal text and court proceedings to gather intelligence and prior decisions by relevant courts. It's not a leap to think that such an agent could be extended to aid in the education of a layperson to build a better understanding of the legislative apparatus of the country in the large, as well as legal proceedings and judicial activity in their local area. The more this is automated, the easier it will be to keep tabs on things, monitor trends and inform voters when it becomes obvious that decisions need to be made.
* AI could be used to compare our government with other more sophisticated governments around the world, and this could help inform voters. Many people don't know this, but one of the reasons the Nordic countries are so far ahead in their governance is that literacy used to be effectively mandatory. There was even a test, based on the Bible, and so those countries became far more literate on average. When newspapers became more commonplace, people were more informed as a whole, and it naturally followed that policies garnered support based on protecting that trait of the people and advancing the quality of life of those nations. Compare that with America today, and it's obvious that something has gone very wrong. While it's tempting to look at the literacy rate by state and assume that party lines are somehow involved, it actually seems to be correlated with population. The bigger the state, the lower the literacy rate. Which makes sense, given how low a priority education and literacy are in the popular culture.
* AI of a sufficient intelligence could certainly be used to combat insane arguments from the far right in our country. We no longer need people around to voice the opinion that interracial couples shouldn't be guaranteed the right to marriage, and it would be great to have access to artificial agents that don't tire or lose their temper trying to talk sense to these people. This is a bit of a rabbit hole in itself, and there's a slippery slope between this and the concept of heavenbanning people on social media and the like.
(Heavenbanning is basically when the interactions are all artificial, and the subject is sequestered without their explicit knowledge from the rest of a website or community and only interface with bots that parrot their ideas or agree with them. So, extreme racists and bad actors end up just engaging in back-and-forths with chatbots, and they are none the wiser that they're actually leaving people alone from their craziness.)
It's safe to say this is just a couple drops in the bucket. I'm not really qualified to provide an in-depth assessment of exactly how ass backwards this country is going, but I haven't given up hope quite yet. I plan on tying education into these points quite a bit, because I think it's a load-bearing point in the landscape of today's America. I'll save that section for last.
There's also an argument to be made that the very structure of our government is inadequate to serve the people. I left this out intentionally because it's just too big a can of worms to get into, but there's a ton of meat there. In America, a lot of land gets to vote, and saner voices are drowned out by people who just own a lot of property with livestock on it and hate black people. It's a shame, but I'm not quite convinced that there's a quick fix to this, despite how simple it sounds. Redistricting would certainly solve the issue in the micro-contexts, but I think it would risk galvanizing more extreme opposition in the macro, if all the racist farmers in the country were denied their influence. It seems to me like an issue that needs to be adjusted gradually so as to boil the frog, rather than poking the pitchfork-hoisting yeehaws with a stick. Better to educate them and bring them into the future than to just leave them behind. Maybe I'm wrong. I'm not sure. Anyhow, that's why I didn't do a big section on it. If we can educate people better, I think a system based on voting works perfectly fine for now.
Finance
Finance is also a broad category and could be expanded into dozens of sections. I'm a little more in tune with the financial landscape, and I think I might have some interesting ideas about how AI might change what finance means to the average American.
First, let's peep the structure.
* Wealth inequality is the continental-sized herd of elephants in the room. The bottom half of Americans collectively control less than 3% of the country's wealth.
- There's no quick fix for this without incredible increases in regulation and enforcement. The Panama Papers revealed that the ultra-wealthy really *were* hiding a ton of wealth overseas and dodging taxes and it seemed like this shocked people for a week, and then it was back to scheduled programming wherein Trump called disabled people names on Twitter. Oh, come on, don't act so surprised!
- Luckily, there will eventually be a floor for people as the situation grows more and more dire. Even the far right party can't ignore the injustice forever, because their constituents are routinely among the poorest and will go hungry just the same as anybody else. Heard of the egg shortage? Have you ever stopped and thought about how crazy it is that a staple food has become expensive enough to disrupt people's financial habits? Twenty years ago, if the price of eggs was rising like this, most people would probably just not buy those eggs for a while and the markets would eventually re-stabilize, if people still wanted eggs. Anyhow, I'm vegan, so I guess I don't really care that much about eggs, but I'm still shocked by some of the perspectives I've seen on the issue. Just buy beans and tofu. /shrug
- Wealth inequality is by no means an American issue and exists all over the world. The nature of humanity at its most essential is that of imperfection, and wherever human beings are involved in operations, there tend to be mistakes, fuck-ups, and crimes. Remember from before: the chain has already started. People who don't feel secure in their finances are often the most amenable to bribes and cover-ups. If a senator runs up a gambling bill and starts facing pressure from organized crime to pay up or choose their least favorite kneecap, do you think he's more or less likely to make some taxpayer money disappear? It happens everywhere.
* The financial landscape is incredibly complex, and often intentionally so.
- This has been fine in the past because it served financial institutions to keep out riffraff from coming onto the scene and imposing too many regulations and rules that don't actually help anybody or don't necessarily make sense. It might seem counter-intuitive from someone as far left as I am, but I say this: On matters of taste, it's wise to defer to the Chef if you're not sure.
- The advent of additional, complex, risky instruments like cryptocurrency has changed things for the worse for the average person and moved significantly more leverage onto institutional investors. By itself, this isn't necessarily a horrendous thing, but in the landscape that exists for the average person wanting to invest their money, it's not great.
- Home ownership is less attainable than ever for young people. Things were bad from 2004 to 2014, and while there are some signs that things might improve, I'm worried about large institutional investors buying up properties to harvest rent money from people rather than empowering people to own their own home and secure their future. It's hard to find reliable statistical information about this, but I strongly suspect the trend of large companies landlording the shit out of the housing market will continue.
- Rent control is practically non-existent in most of the country. There's a not-quite-cottage-anymore industry of buying a house just to rent it out at a markup, or even refinancing it to acquire more property, refinancing that new property to acquire even more, and so on. Without proper rent controls, people are being squeezed: homeownership has become so difficult, and the renting landscape so anti-renter, that they have no choice but to pay more and more for rent. Vicious cycle, right? You can't buy a house because you don't have enough money or good enough credit, despite the fact that the mortgage would be cheaper than rent. You can barely afford rent because everybody is fighting over a limited supply controlled by people who have more property than they need and know they can squeeze anyone who can't afford to buy a house. Rent goes up and up, pay isn't keeping up, so it becomes harder and harder to put enough away to land a home. You get the picture.
- Most older Americans are incredibly out of touch with how bad the financial landscape looks to younger Americans. Deep down, I suspect something like this happened before, after the first world war, to a subtler extent. Either way, if you ask the average 70-year-old to look at a property and guess how much it costs, they'll often guess less than half or a third of the price. You can try this out yourself.
* I have long suspected that the infrastructure and accountability mechanisms behind most banks are far more fragile than they let on to the public.
- There has been an obsession with Return on Equity for too long. ROE is perhaps the dumbest thing to optimize for, because of how fragile it makes the underlying business, but what do I know? What I know is this: If we've all decided to do away with the war chest and rely on the flywheel model, with liabilities continuously being leveraged for short-term profits, then it only raises the height from which banks will fall when it's time to clean out the couch cushions and empty the war chest they DON'T have.
- I'd get a lot of pushback on this from economists and banking professionals, but the fact that bank runs are typically so contagious is really proof that fractional-reserve banking can't stand on its own two feet without having its hands held; otherwise, there would have been regulation by now to account for situations where banks get unpredictably swept by depositors. Consider: If you're the bank, then when someone comes in and deposits $100m, you are bearing additional liability on growing that capital. It's an odd type of relationship.
- That being said, it's not exactly easy to come up with a better way to serve the financial needs of private citizens, businesses, pensions, governments, etc, all with the same system. I am not convinced an optimal system exists for all of them, and it might make sense for there to be multiple “layers” of financial infrastructure to provide for the needs of different customers.
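To make the fragility point concrete, here's a toy fractional-reserve sketch in Python. All figures are hypothetical and this models no real bank: a bank holding 10% reserves weathers a calm stretch of withdrawals just fine, but once panic makes each day's withdrawals snowball, the same bank fails within days.

```python
# Toy fractional-reserve sketch (all figures hypothetical).
# The bank keeps `reserve_ratio` of deposits as cash and lends out the rest.
# Each day, `daily_fraction` of remaining deposits is withdrawn; if rumors
# spread, that fraction multiplies, and the run becomes self-fulfilling.

def run_days(deposits, reserve_ratio, daily_fraction, panic_multiplier, days):
    """Return the day the bank can't pay, or None if it survives `days` days."""
    reserves = deposits * reserve_ratio
    for day in range(1, days + 1):
        demanded = deposits * daily_fraction
        if demanded > reserves:
            return day  # reserves exhausted: the run wins
        reserves -= demanded
        deposits -= demanded
        daily_fraction *= panic_multiplier  # word gets around
    return None

# $100m in deposits, 10% held as cash reserves.
print(run_days(100e6, 0.10, 0.01, 1.0, 10))  # calm: survives all 10 days -> None
print(run_days(100e6, 0.10, 0.01, 2.0, 10))  # panic doubles daily -> fails on day 4
```

The contagion lives in that one multiplier: identical banks, identical reserves, and the only difference between solvency and collapse is whether depositors believe other depositors are pulling out.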
Can AI fix all of this? I'm not sure. I'm confident that AI could definitely make sense of all of this, and that would certainly be a good start. A more interesting question would be to ask whether or not we can trust our existing financial institutions and systems to provide AI to customers and whether or not it would serve the customer well.
I've always thought that free market capitalism just doesn't really make much sense. How could someone ever compete with the bigger, pre-existing company if it was run optimally? There comes a time where a company has become so large that a competitor could at best hope to merge with it. I mean, I daresay the average person could be taught to produce a better burger than McDonalds in an afternoon. That being said, the enormous assets possessed by the McDonalds corporation mean that they'll basically never be toppled. They can pay off legislators, they can buy competitors, they can hire their talent away by offering more money; there's an endless toolbox for corporations with large amounts of capital to use against a smaller competitor, even if that competitor has a superior product. I mean, McDonalds is really a great example because you can get a better burger almost ANYWHERE, and yet the corporation is practically a government unto itself.
One obvious application of AI is investing. You can be sure that every institutional investor runs a big building full of servers whirring away, faithfully multiplying those matrices and figuring out who is fucking up order prices by hundredths of a cent. Algorithmic trading, despite the cool name, is a fairly boring and straightforward information processing problem. Ultimately, the world is a big network of money and money-spenders. With enough understanding of spending habits, up-to-date information on pricing and spending, an agent could accurately predict just about anything. I'm not saying that financial markets are entirely deterministic, but what I am saying is that there's an upper limit to the trading side of finance. At the end of the day, money is useful because it's exchanged for things human beings want or need. It keeps track of deeds and values, and it enables bookkeeping and the organization of complex activities between diverse parties. As more and more people aren't getting what they need or what they want, things will have to change. If it comes down to people starving, and it may come down to that, the conflict will be immense but very focused.
I think an AI would probably focus on collective measures for the most part. America has fostered a hyper-individualized culture of money and business that has really disrupted the natural order of human beings taking care of one another. It's considered chic now to hustle your friends into your MLM scheme, to fool people into buying things they don't need and don't benefit from, to extract the maximum while delivering the minimum. Gone is the pride of charity, gone is the reverence for those willing to sacrifice for others without expectation, and gone is the decency to consider the needs of those less fortunate over the personal convenience for oneself. What good is any of this? Does it serve people if half the country can't eat proper food, will never own a house of their own, can't afford proper medical attention for injuries and ailments? How did the world lose its way to this thing called money? Of course, I'm the foolish one for speaking of these things. I am naive and shallow-minded for not thinking from the lens of the hustle-culture grindset. I'm the butt of the joke, for the rage that burns inside me when I think of the thousands of homeless people all throughout our country and the people that say “Oh, they WANT to be homeless.”
Anyhow, I don't think it takes a next-generation superintelligence to see that money has really fucked things up.
Industrialized Business & Logistics
This section will deal with automation and robotics and the like. This isn't quite a clean fit for the discussion of AI, but I think there's still plenty of adjacency here, because the complexity of industrial operations will very likely skyrocket as automation increases.
Let's peep the structure:
* A lot of industrial work is done by human beings.
- While some enjoy it, I think with some thoughtful discussion, most can come to understand that they don't actually enjoy repetitive factory work but rather feel significant by seeing their impact in a larger system. If you build ambulances but work in the factory screwing on hubcaps all day, it should be obvious that you aren't necessarily passionate about hubcap installation. No, you're driven by the pay the job provides, and by the feeling of seeing an ambulance speed by you on the road to the hospital and the knowledge that you helped make that possible. Someone gets to the hospital in record time because of the hard work and long hours you exchanged for your pay.
- With that said, there's generally a bunch of people towards the top of the ol' org chart making exorbitant amounts of money for drastically reduced risks, much more pleasant working conditions, and often the complete absence of accountability in any form whatsoever, whereas a hubcap installer might lose their job for showing up ten minutes late a few too many times.
- Technology here plays a role, because perhaps surprisingly: it's just plain cheaper to have human beings do a lot of tasks. We really are the most effective robots, most of the time. It's still prohibitively difficult to achieve fine motor tasks with machines, reliably. I'm not a robotics expert, but I suspect that *hands* are actually a huge bottleneck to automation efforts. Perhaps in the future, lab-grown human skin could be produced to cover robotic hands and provide the diverse grip capabilities needed for many tasks.
* A lot of industrial work is dangerous, and not enough is done to hold corporations accountable for deaths and injuries.
- I'm a fan of OSHA, but I think it's a drop in the bucket a lot of the time. Many issues lead back to government-related obstacles, as there is a huge business incentive to pay off legislators to reject measures to advance rights and quality-of-life-at-work for workers.
- Talk is cheap, and a lot of industrial corporations do a lot of talking. “We are deeply saddened that Jimbob fell into the molten aluminum and are working with legal to assure that his negative drug screen is thrown out and treated as invalid, to raise questions about his competence and ability despite years of experience in order to bolster our case, and cooperate with relevant agencies to pay out the absolute minimum amount of insurance to Jimbob's family members as well as using our immense resources to drag his image through the media in order to garner sympathy and discourage workplace reforms by voters by painting him as a moron who deserved this incredibly painful and brutal death in front of his coworkers. If you feel distressed at this time, we invite you to take advantage of the most pitifully transparent mental health fraud we could find, and we will mark you on a permanent no-promote blacklist for the duration of your career here since you have self-identified as mentally unwell after watching Jimbob's skin boil right off his bones before your 15min lunch break on your 12hr shift last week.”
- Robots are already being used to do dangerous things. Firefighting is one example that comes to mind rather immediately, where lives might be spared by using suitable robots instead. You can imagine a sort of big, long metal snake that searches through a burning structure looking for survivors. They grab on, and the snake pulls them out while another snake swaps in and looks for more people. Like a winch, but smarter.
* Logistics work is really important, and there are a lot of moving parts in the system. It turns out to be quite difficult to move 12 billion tons of freight a year across this country.
- Our infrastructure is, quite frankly, decrepit throughout most of the United States. We are going to see a lot more train derailments, and perhaps more severe incidents as time goes on.
- Mistakes can be costly because of the human factor required in moving so much specialized cargo in so many diverse places, and also due to the scale at which things are moved. Delays can rebound and echo throughout the system, and rack up lost revenue. Remember the Suez Canal thing? I think we got off lucky. The Suez Canal Authority claims to have missed out on about $95m in fee revenue during the weeklong ordeal, and that's just the canal fees. Businesses had ships full of dead product, missed orders, expired goods, overheated supplies that couldn't be used; the list goes on and on. While it was happening, one of the thoughts I had was, “What if it tips over, and spills all those containers into the canal?” Might not seem like that big of a deal, but I tell you, it would have been absolutely catastrophic. These are the things that keep me up, because the root cause could be something seemingly trivial, like a boat tipping over and dumping shit into a canal, except that the scale is so absurdly large that the knock-on effects total into the billions or even tens of billions of dollars. For a single event.
- Self-driving is a tired topic for a lot of people, but I really do think that it will be a game-changer once it can be implemented in a reliable way, and at national scale. For one thing, around 30,000 people are dying each year in this country due to car accidents. Roughly every twenty minutes, someone's life is snuffed out in a fiery crash. Granted, that's actually FEWER people than die by suicide, but it's still worth reducing. Another aspect of self-driving is time. Americans apparently spend about 17,000 minutes driving per year, which is something like 280 hours. That's five hours a week, or maybe an hour per workday. May not sound like much, but altogether it adds up to hundreds of millions of hours lost every month that human beings could be doing something else with their time. Yet another aspect of self-driving is money. If cars became reliable enough, it would quickly be possible to rent a car for single destinations without paying for an expensive service like Uber or Lyft. The major expense there is the person driving the car. AAA reports a number around $0.10/mi for average maintenance cost on a car. How many miles away from work do you live? Let's say you live ten miles from work, and that your car note is $400/mo. Many pay more than that, even. Anyhow, that comes out to about $13/day in a 30-day month. According to the number from AAA, that would be equivalent to the average maintenance costs for $13 / $0.10 = 130mi of driving. All this to say, for a lot of people the luxury of having a car would be outweighed by saving a huge chunk of money and just paying for self-driving cars when they want to make a trip somewhere. We drive an average of 35 miles/day, according to Kelley Blue Book, so even if it were to cost a quarter per mile, $0.25 x 35 = $8.75, it would still be cheaper than paying the car note.
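The back-of-envelope math above fits in a few lines of Python. The $400/mo note and the $0.25/mi self-driving fare are the same illustrative assumptions as in the text, not data; note that exact division gives about $13.33/day rather than the rounded $13.

```python
# Back-of-envelope from the figures above. The $400/mo car note and the
# $0.25/mi self-driving fare are illustrative assumptions, not data;
# the $0.10/mi maintenance figure is the AAA average cited in the text.

car_note_monthly = 400.0
days_per_month = 30
note_per_day = car_note_monthly / days_per_month        # ~$13.33/day (text rounds to $13)

maintenance_per_mile = 0.10
miles_equivalent = note_per_day / maintenance_per_mile  # ~133 mi of upkeep per day

avg_miles_per_day = 35                                  # Kelley Blue Book average
robo_fare_per_mile = 0.25
robo_cost_per_day = robo_fare_per_mile * avg_miles_per_day  # $8.75/day

print(f"note/day: ${note_per_day:.2f}, robo fare/day: ${robo_cost_per_day:.2f}")
# The hypothetical fare ($8.75/day) undercuts the note (~$13.33/day)
# before even counting the owner's maintenance, insurance, or fuel.
```

The comparison is deliberately lopsided in the car's favor, since the note-only figure ignores maintenance, insurance, fuel, and parking, and the fare still wins.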
So, there are a lot of obvious applications for AI here. It's important to remember that there is more to life than working 8-12 hours a day, and that a lot of people would prefer to spend more time with their families, more time exercising, more time cooking, more time reading and writing, more time educating themselves. If robots start taking people's jobs, it doesn't have to be a BAD thing, so long as there are protocols in place to make sure that those profits are distributed in a way that makes sense. Remember, money is just an object to exchange for things we need or want, at the end of the day. If we are to live in harmony with people, it will eventually require us to look past the selfish habit of placing an admiring lens on entrepreneurs and instead to celebrate the ability of humanity to take care of itself. I can't stress enough that money is not REAL, if enough people just decide to change things. When robots start doing all the work, there's no legitimate reason why we couldn't just spend our time doing other things. When it comes time to do the remaining work, humanity as a whole could prioritize the automation of that work, and that's assuming AI hasn't advanced to the point where it could just jump in automatically.
At some point, it will become extremely easy to train robotic assistants to do new tasks. There may come a time where they need not be trained at all but need only observe the task or download the knowledge from another robot who has already learned it. In this way, robots might serve as a form of infrastructure themselves, wearing whatever hat they need to wear for the task at hand. A storm comes through and knocks down a bunch of traffic lights? The nearby robots download the Engineer knowledge pack and head to the scene to repair things. A drought comes through an area? The farmer robots communicate to the grid of robots at large, and others prioritize different crops in another area to make up for the soon-to-be-missing harvest.
By working together as a larger, cohesive unit, amazing things can be accomplished a lot easier than by having a bunch of corporations or a bunch of individuals battle it out to the absolute limits of human greed.
Education
I think education is by far the most significant factor related to the systemic exploitation of humanity all around the world. That's a pretty bold claim, I get it. At the same time, it makes sense that the most effective measure you can use to exploit someone is to convince them to be willing to be exploited in the first place. If we go all the way back to the guy who ripped us off at the market, the guy we tripped and laughed at when he landed in the mud, we can see a thread to this very issue. First, let me make a little point about reasoning.
I'm a father to a 7-month-old little girl. She's not old enough to speak in sentences yet; she babbles “mama” and “dada” and so on, and she hasn't hit me with the recursive “Why?” yet. I'm excited for when she does, because I think it represents a very important function of human cognition and a certain characteristic of how we perceive the world. Remember, an artificial agent might not be subject to the pressures we've identified by researching evolutionary psychology, but that research has told us an awful lot about human beings and what our brains are capable of. Asking “Why?” over and over is an important thing, because it represents our consciousness demonstrating self-awareness, agency, and some form of determinism. A child who asks “Why?” understands:
* I am a person, and I can ask this question to understand something new about the world around me.
* By doing things and saying things, I can have an effect on this world.
* Some questions have definite answers. Even though I might not know the word, these question-answer pairs are immutable, and allow me to build up concrete knowledge about the world.
This is amazing! This means that by asking questions, by examination of knowledge, we can both a) gather new knowledge, and b) act on existing knowledge in a manner of our own choosing, to serve whatever need we decide to pursue. In this case, it's curiosity, but the larger takeaway here is that a rational actor does things for a reason.
Why would that guy rip us off? Under the assumption that he's a rational actor, we know that he did what he did for a reason. However poor we judge the reason doesn't really matter; the point is we can examine his decision to gain information.
Perhaps:
* He had a bad day, and the stress caused him to make a mistake.
* He could be behind on rent, or needs to pay a debt, or something like that.
* There could be some social reason, perhaps we are dating someone he desires, and he resents us out of jealousy.
These are just a few, but you get the idea. It's much more likely than not that there's some sort of externality here responsible for motivating him to rip us off. By tripping him into the mud, we only do more harm to someone who almost certainly didn't set out to harm us, despite the error. This is a difficult concept for a lot of Americans, because there's this idea that all criminals are solely responsible for all of the circumstances they endure and all of the decisions that they make, without consideration for the human factor of imperfection, and the limits of any man or woman to continue on in optimism when things are bad. As a parent, I would expect another parent to be willing to go to great lengths in order to feed their child. I suppose that carries with it the assumption that they love their child as I do mine, but that's beside the point. A parent that loves their child and does what is right and good may still find themselves willing to commit an atrocity in order to provide for that child and protect them. In the same way but to a subtler degree, I expect every human being to be making decisions in accordance with what they want for themselves, but they often aren't.
There are a million different angles to approach this from, and I apologize if I'm doing a poor job of it. I probably am. I'll keep trying though, because this is arguably the most important part of this post, and I've spent nearly 10,000 words already approaching this part.
For humanity to advance into an age where human effort is obsolete, where human intellect is dwarfed by silicon, there will be a precipice beyond which selfish tendencies must be quenched in favor of collective cooperation. It will require people to examine their habits and thoughts and make decisions on policies that seem strange and alien to what they're used to, in the interest of the betterment of others. It's very hard to convince people of things that don't pertain to them because of the prevalence of Self-Serving Bias, among many other reasons. Self-Serving Bias is the idea that many people attribute their successes to their personal traits and strengths, whilst attributing their failures to externalities. In other words, if I win then it's because I was the better player but if I lose, it's because some other factor was at play that I had no control over. The game lagged, the cards weren't shuffled enough, the ball isn't inflated properly, the opponents are on steroids, whatever it may be. It happens all the time. We see someone rob a store and get caught and think, “Well aren't I smart for not doing that!” while at the same time having never experienced half of the troubles of that person. That person may have been abandoned by their parents, maybe they've been starving for a week and can't take it anymore, maybe they got laid off and can't find childcare and have nobody around them to help them. And yet, when we experience success, we pat ourselves on the back and think about how great we are, despite playing the game with a deck that was always going to win.
What I am trying to get at, in practical terms, is the advantages and disadvantages we face in life. One of the biggest advantages we can give ourselves in this world is knowledge. We can gain knowledge easier than at any other time in human history. I know you've heard this before, but I'll say it as many times as I get the chance to because it's true and it's important: You can learn almost anything you want to, for free, by carefully searching the internet for resources.
I know as well as anybody that knowledge alone isn't always good enough. A lot of smart people don't get the job because they don't have that special piece of paper that says Cornell or Harvard on it, but instead have the piece of printer paper from their day job's office that they printed for themselves with a certificate from an online program. It looks less impressive. It's not from Cornell. It didn't cost thousands of dollars and didn't take four years to achieve, but the knowledge is the same, and the passion is still there. In fact, it might even be stronger. That piece of paper doesn't tell the full story about reading lessons between diaper changes of your infant, it doesn't tell the story of staying up past midnight learning about new concepts with nobody but Google to teach you the prerequisites. It doesn't tell the story that you didn't have an academic advisor to hold your hand when you couldn't get calculus at first, it doesn't show the times you almost quit but didn't. It doesn't show any of that.
I started this post with an example about entrance essays, but I have a lot to say about the role of higher education in this country and how it might be disrupted as AI develops to a point of superiority to traditional classroom teaching. There's already a growing understanding of how to integrate techniques like spaced repetition into educational curricula and there's been some research done that shows this to yield improvements amongst diverse groups of students. I'll let you use your google-fu to verify this.
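For the curious, the core of spaced repetition is simple enough to sketch. Here's a minimal Leitner-style scheduler in Python; the box intervals are illustrative, not taken from any particular study or curriculum.

```python
# Minimal Leitner-style spaced repetition sketch (intervals are illustrative).
# A correct answer promotes a card to a box that is reviewed less often;
# a miss sends it back to box 1 for frequent review.

INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}  # box -> days until next review

def review(card, correct, today):
    """Return the card's new box and next due day after one review."""
    box = min(card["box"] + 1, 5) if correct else 1
    return {"box": box, "due": today + INTERVALS[box]}

card = {"box": 1, "due": 0}
card = review(card, correct=True, today=0)    # promoted to box 2, due day 3
card = review(card, correct=True, today=3)    # promoted to box 3, due day 10
card = review(card, correct=False, today=10)  # missed: back to box 1, due day 11
print(card)  # {'box': 1, 'due': 11}
```

The whole trick is that material you know well gets out of your way while material you keep missing comes back quickly, which is exactly the scheduling that a fixed classroom syllabus can't do per-student.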
Teachers are in short supply, more so than ever before. Everything is, everybody is, the world is growing harsher and more demanding as the months creep by. With that said, the compounding effect of dwindling educational resources and less reliable public-school systems is setting this country up for a multi-generational brain drain of potentially cataclysmic proportions. China is experiencing a decreasing birth rate but is still producing record amounts of genius-level students, many of them fluent in English and possessing educations far outstripping the average public-school student in the United States. Again, you can use your google-fu to verify this.
The birth rate in the United States is holding steady, even throughout the pandemic. Some states actually reported higher birth rates during lockdown, although that seems obvious in hindsight with work-from-home policies sparking ... increased chances of domestic bonding and intimacy. You know what I'm getting at.
So what? You might be asking that right about now. However, what happens if these trends continue? If China continues to produce thousands of geniuses every year, if India continues to produce thousands of geniuses every year, and if the United States continues to let Republicans destroy the average person's educational potential by dismantling public schools and preventing them from adapting to a modern world? Well, one obvious knock-on effect is that things are still going to need to get done. We will still need engineers to build roads and fix things, and we will still need scientists to verify that drugs are safe, that water treatment plants are working as intended, etc. There are a million things for someone with a higher education to accomplish in this country, and yet more people than ever are feeling the financial pinch as college moves out of reach.
Some numbers that I googled on Statista and Pew Research seem to suggest otherwise. They project that the number of graduates per year will continue to increase well into 2030 and beyond, but I am skeptical. I think there is the potential for a sort of “perfect storm” of societal and cultural factors that might collude to cause a major meltdown in higher education.
* AI will improve.
- I think we can all agree on that. The knock-on effect of improved AI is that its existence will threaten jobs with a high floor of prerequisite knowledge and cause more jobs that would've previously been done by a human being with a substantial education to disappear. As AI improves, this will eventually hit a breaking point and begin to disincentivize the higher degrees like PhDs and master's because of the obsolescence factor brought on by superior AIs. Obviously, there will still be people who pursue degrees out of economic freedom, who want to exercise their right to educate themselves, but the vast majority will eventually be botted out to the machines.
* Automation technologies will improve.
- I've spent nearly a decade in and around the food service industry. I've worked in catering, in wholesale kitchen equipment, chain restaurants, local restaurants, fine dining, and specialty dining. I'm no chef, but I can walk into nearly any kitchen and get enough done to at least be pulling my weight, maybe with the exception of Michelin kitchens and the like. Anyhow, with all this experience I've gained an intuition for trends in the industry, and one that concerns me a great deal is the concept of an automated kitchen. McDonalds has already rolled out several hands-free stores, I think in California or Texas or somewhere. You can order with voice recognition at a kiosk, and sophisticated robotic machinery prepares the food, bags it, and serves it up. Restaurants are using little Roomba-looking bots to run dishes out to tables and refill drinks. More and more administrative work like scheduling, payroll, supplies, ordering and more is being automated by the tech industry, multiplying the productivity of managerial staff and often reducing the headcount of total supervisors per location. This is all just food industry stuff. The same is going to come for retail workers like stockers and cashiers, the same will come for hospitality workers like maids and attendants, the same will come for you, whatever it is that you do, if we wait long enough.
The knock-on effect is that more and more people are going to find themselves obsolete and unable to find work as entry-level workers. So, we can see the pincer effect of people at the top of the education spectrum being threatened by AI, and people at the bottom of that same spectrum being threatened by automation.
* This country has allowed access to higher education to become a borderline partisan issue.
- The sane left seeks to empower people to educate themselves and contribute to the economy meaningfully and the far-right party seeks to ensnare working people in low mobility situations where they must stay put in jobs that don't empower them, for as long as possible. They perceive education as a threat because of the modern culture of college campuses, and ideas like racial equity and tolerance for those with diverse sexualities threaten the status quo in backwater dirt road districts where things have always stayed exactly the same.
The knock-on effect here is doubly brutal. Here's a link about the “diploma divide.”
This is taking data from 2016, but I couldn't find a decent source with more recent numbers that seemed as thorough within a few minutes of searching, so you might have to take to your search bar to further validate this idea.
There's a lot to digest here. This is the largest article I've ever published, and I realize that very few people are going to end up reading it from top to bottom. I get it. Part of the experience of writing this was to serve as an opportunity for me to chew on some things more thoughtfully and force myself to articulate some ideas that have been on my mind recently.
One of the things that shocked me while writing this was learning that more people commit suicide than die in car accidents each year. I didn't know that, and I would not have suspected it to be the case. I've battled with suicide in the past and have nearly killed myself on several occasions during dark periods of my life. Today, I recognize that life is precious and valuable, no matter how hard the times get, no matter what gets taken away from me, no matter what injustice I face. I will fight to continue even if there's no point, no chance of success, no reward for survival whatsoever. We live in some of the bleakest times in history. Sure, we don't have polio anymore, we have running water and electricity. The conditions of life in this world, in this country, have certainly improved from an objective baseline as compared to a hundred years ago, or even fifty years ago. However, it should be obvious to anybody with even a modest ability to empathize with another human being that life in this world is subjective. Life *is* about feelings, no matter the snarky phrases used to deride those who would stand up for progress and positive changes in society.
Looking at the future causes many people to feel afraid, to feel uncertain about their path, to feel powerless about their ability to change their situation. There are fewer and fewer roads to Rome these days, and even those precious roads are eroding, washed out in great stretches. People are tricked into joining the military and killing for the chance at an education, only to be denied the medical benefits they were promised by Veterans Affairs. Veterans are at some of the highest risk for suicide attempts in this country, despite recruiters telling them how good life will be after a few years on the inside. Students are facing astronomical debts, and insurance companies employ thousands of bright people to come up with creative new structures and policies to prevent people from recovering from their debts, their injuries, and their misfortunes.
It's all true. The future *seems* really, really bad. Will we be able to change it in time? Many things must change if we are to succeed. Many people will die needlessly early if we do not. That alone bears heavy on my heart, but it could get even worse than higher rates of suicide. Rates of crime will increase as more of the poorer parts of society can't find reliable work. More resources will be denied to people who need them as Republicans fortify their positions and deny access to people. As they strip away education from people little by little, more people fall under their spell like a contagious virus, and the far-right party grows stronger, and able to take away more resources from more people. We'll spend more and more on defense, despite the decreasing quality of life for veterans with each passing day, with more veterans killing themselves each year.
It's a desperate age. And just think: I left climate change out of this the whole time!