Marching Forward

Technology is known for marching forward, despite obvious risks and reasons for hesitation.

I'm sure we've all read about many occasions lately where ChatGPT was used in questionable ways. It's concerning to say the least, but it also presents opportunities to consider scenarios where the gut response dismisses important nuance in the problem.

For example, let's say you saw a headline:

Students using ChatGPT to ace entrance exams, officials baffled

Okay, so it sounds like a perfect use of technology to stir up some muddy waters. They may not seem quite so muddy, depending on your perspective and your opinion on entrance exams. Personally, I don't think a written essay is a good enough data point to validate a student either for or against admission into a university, regardless of their major. Even a Communications or English major will need to develop many other skills besides composition, so what does an essay truly reveal about their academic or professional potential? That aside, let's start our little dig right there.

What does it really entail?

A good friend of mine gave me the second-best advice I ever got, which was “Always peep the structure.” What he meant was to first examine the underlying elements of something and see what can be gleaned from understanding the connections. Applying this advice to the situation of students using AI to automate their entrance exams, we see:

* Students will have an easier time getting a decent chance, since they'll have an excellent essay by default.

* Universities will need to decide if they truly *want* high-quality essays at any cost, and whether or not it's worth the risk on their end to place so much faith in the essay process.

(This example is a little contrived, because I think you and I both know that in the event of any sort of systemic hardship, the universities will just start pulling credit of family members or accepting bribes outright under grey-market schemes. Despite the bribe ring busts of the past few years, I remain skeptical that institutions that have catered to the rich and powerful for 300 years will just up and cut their ties to the rich and powerful prospective progeny, practically speaking.)

So, the structure seems to illustrate a landscape of power and prestige in this particular example, and the disruption of that by way of compositional prowess. Can't write an essay? No problem, AI can do that. No AI allowed? Ok, well let's just make sure the rich kids get in, and that's that.

Not exactly a cheerful prediction of what's to come, as you can see.

AI is weird. It makes a lot of things possible, but it also presents a strange sort of cliff that we might fall off at any given moment, risking incalculable if not immeasurable ruin and catastrophe spanning the entire globe. And that's not taking into account the conflict that would occur immediately afterward.

There's this idea of superintelligence. Nick Bostrom has written a wonderful book called “Superintelligence” about this problem, and a ton of people much smarter than me have already written scores of analyses of it. I will attempt to deliver a micro-summary:

* A superintelligence might not be easily controlled, and if it isn't, it might act in a fashion that either transcends free will as we know it or supersedes any current human idea of reasonable responsibility. That is to say, it may have absolutely no regard for human life, fairness, the spirit of peace, or indeed even a reverence for consciousness of any kind. It's difficult to overstate how grave this is, because we have absolutely no idea what to expect from an agent like this.

* A machine that can improve itself might reach a point where it starts to “tighten” its loop. Imagine a bank account that pays 1% interest every 12 months, then shortens the next payout interval to 11 months and 3 weeks. Then 11 months and 2 weeks, and so on. It's even worse than it seems at first because of the exponential nature of it: the account delivers its interest quicker and quicker, while also increasing the swiftness of the next shrinkage. How long do you think it would take to get down to just 1 week in this example? Only about 26 years. Keep in mind the account is collecting a lot of interest along the way and could probably use some of it to leverage its way to a better rate or a quicker schedule.

* As the machine tightens this loop, the speed at which it can process things presumably ratchets up well beyond anything a human can fathom. We already see this today, and have in many ways for decades. Nobody can out-calculate machines at arithmetic. As time has gone on, machines have been overtaking humans at increasingly abstract tasks like chess, driving, and essay-writing. If a machine discovers a way to rapidly improve itself, it could quickly become more intelligent than all of humanity combined, or multiplied, or whatever you want. The self-propelling aspect is the key takeaway here.
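The bank-account analogy is easy to check with a few lines of Python. This is just a sketch of the analogy's own numbers (function and variable names are mine, purely illustrative): the payout interval starts at 52 weeks and shrinks by one week after each payout.

```python
def weeks_until_interval_hits(target_weeks: int, start_weeks: int = 52) -> int:
    """Total weeks elapsed before the payout interval shrinks to `target_weeks`."""
    elapsed = 0
    interval = start_weeks
    while interval > target_weeks:
        elapsed += interval  # wait out the current payout interval
        interval -= 1        # then the interval tightens by one week
    return elapsed

total = weeks_until_interval_hits(1)
print(total, round(total / 52, 1))  # 1377 weeks, about 26.5 years
```

The sum 52 + 51 + ... + 2 comes out to 1377 weeks, which is where the "about 26 years" figure comes from, and that's before the account reinvests any of its winnings.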

Have you ever been driving along, noticed something way up ahead in the road, and been able to prepare for it before the other drivers around you? Then there's that moment where you've already moved over, and the cars around you now have to merge because they were late to realize that a lane is closed or whatever it is. In the same amount of time, a superintelligent agent might've constructed a predictive profile for every single driver on the road throughout a state or even an entire country. It might've calculated which traffic lights to disrupt to cause the largest traffic delays in each major city. What happens when all the traffic lights turn off, all the gas pumps stop working for some reason, all the automated farm equipment starts malfunctioning, dams stop making power, phones stop taking calls... an agent like that could **easily** crumple a country in a matter of perhaps several seconds. We just don't know how bad it could get. That might even be the sugarcoated, rainbow-sprinkles version of events. Perhaps it crashes every single plane simultaneously, shorts every circuit in the grid that it can, launches weapon systems, adjusts the orbits of satellites to collide and create debris fields preventing us from leaving Earth safely for hundreds of years. I could write this stuff all day, but a superintelligent agent could do it in an instant if it found a way to the internet and just decided to.

There may be an in-between period. I'm going to use the term “marginal agents” to mean agents that are far superior in some way to the AI agents available to consumers, but not convulsing with cosmic-scale intelligence via self-improvement. They are just REALLY smart agents, under someone's control.

Again, many, many, MANY people more intelligent and better educated than me have written about this. I merely offer the following as my contribution before we continue my doomsaying: any sufficiently large agent that finds a way to self-improve could theoretically become a superintelligent agent.

Currently, the best agents require large amounts of computing resources. Finished products can sometimes be reduced and optimized to run on consumer hardware, but it's safe to say that giant warehouses full of 42U/48U racks whirring away at matrices are generally where the leading edge of AI research and development lives. This has a couple of angles to it, so let's peep the structure.

* Giant corporations like Microsoft and Google have access to immense amounts of money and manpower (an archaic word meaning configurable human effort) to throw at complex research problems like large language models and peta-scale matrix multiplication.

* If entities that aren't enormous corporations end up discovering or in some way manifesting a superintelligent agent, I posit that it will likely have some atypical structure, since the hordes of researchers at the megacorps didn't find it first.

Quality doomsaying, yeah? Well, I'm only just getting started. Before I continue down the “we're all definitely fucked” line, I want to divert a bit and put on my philosopher hat.

Up to this point, I've mentioned many behaviors and actions that an agent might take, and everything I've said makes a certain kind of sense, from one human to another reading this on their phone or laptop or whatever. You know what I'm getting at when I say “if it gets out, it might rapidly spread across the internet and be irreversible,” right? It's easy for us to apply a sort of veneer to ideas like this, as though we were the subject of that sentence. The reality is a lot more complicated when we talk about AI, because it might not obey anything we know about ourselves, or anything we know from evolutionary psychology and neuroscience.

I'm not going to go all Andrew Huberman on you. I mean, you should take a cold shower twice a week, but that's all I'm going to say.

What's important here is that our minds work the way they do thanks to hundreds of thousands of years of refinement and incremental change. As far as we can tell, nearly every aspect of so-called human intelligence comes down to natural selection, whereby ineffective traits fared poorly in the wilderness. Can't right-click and Save As? Eaten by a mountain lion. Can't pivot table in Excel? Eaten by a mountain lion. Don't know the difference between FAT32 and NTFS? Eaten by a mountain lion. Throughout the ages, our brains and organs have collaborated in our collective success as a species. We can detect and track movement effectively because, at some point in time prior, failure to do so would've resulted in the mountain lion. We can locomote, and find and remember paths through unfamiliar terrain, because again, at some point in time prior, failure to do so would've resulted in the mountain lion.

A superintelligent agent might not exhibit any of this behavior. Many people have written about this and related concepts. Perhaps you've seen media where a genie appears to grant someone's wish. When the person makes their wish, it's fulfilled in some cruel way that they weren't expecting, but that's still in line with the words they used. Would a superintelligent agent behave sort of like that?

If it's self-improving by way of mutation, it might stumble on an “escape hatch” of sorts that allows it to take action not strictly permitted by its containment function, via some loophole. That's not great, because it might then apply itself to the remainder of the containment function it hasn't yet circumvented, in unexpected ways. Computers only ever do what they're programmed to do, until they can program themselves.

An interesting lens on this is the concept of crime. Personally speaking, I worship Jesus and I try to keep his teachings close to my heart. All the same, I have my own little brand of ethos that I've derived from the New Testament. “Do No Harm & Take No Shit. Family First & Friends Forever.” From these four principles I sorta balance my life and my decisions. Of course, I know Thou shalt not murder, Thou shalt not covet, and the other eight tracks on Moses' mixtape, but it's often hard to remember such prescriptive ideals during heated moments. I think therein lies some sort of insight, because a lot of crimes are committed despite knowledge that they're wrong or illegal. A parent finds out their neighbor molested their 12yo daughter and kicks the neighbor's door down and beats him to death in front of his wife. Why? Well, I mean, it's obvious why. He did it out of compassion for his daughter, whom he loves. “Who among you would give their son a snake if they asked for a fish?” A prominent DJ in the Bible said that. Anyhow, my point is, the guy didn't kick his neighbor's door in and think “Ah, you know, I think I've violated a number of statutes here, I shall wait for the authorities to dispense their justice, as is proper and correct,” but instead finishes what the neighbor started, once and for all, like a parent should.

Anyhow, I think it's completely feasible to think that the thing we speak of called “conscience” comes from evopsych and learned patterns in the wild. Maybe there's a lion that keeps attacking livestock in the village but not eating it. Why would it do that, if not to eat it? That animal needs to be dispensed with, in order to protect the flock, but also because it represents a disharmonious agent within a system that ordinarily makes sense and has balance. So too with people in the village, so too with actions that those people take. Rip me off in the market? Yeah, I'm sticking my foot out to trip your ass if I get the chance. When you fall in the mud, I know you feel mad, and I suspect too that you know why I did it, because of the system of hostility to discordant agents. I tripped you because you wronged me. Stretch the mind a bit and you might arrive at the implication that I would not have tripped you had you not ripped me off in the market.

There's a problem though. Recursion. It's always recursion. Python doesn't do tail call optimization, so you get that annoying RecursionError, and it's extra annoying if you can't set sys.setrecursionlimit(n) high enough, because then you need to re-evaluate the logic or implement a trampoline; there's no quick fix. The recursive case in our example here: Why did he rip me off in the market? Why did the agent choose the aberrant behavior? Why did that person do the wrong thing when they knew the right thing to do?
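Since I brought it up, here's a minimal sketch of the trampoline workaround (the function names are my own, purely illustrative): instead of recursing directly, the function returns a zero-argument thunk, and a driver loop keeps calling thunks until it gets back a plain value, so the call stack never grows.

```python
def trampoline(fn, *args):
    """Drive a thunk-returning function without growing the call stack."""
    result = fn(*args)
    while callable(result):  # keep bouncing until we get a real value
        result = result()
    return result

def running_total(n, acc=0):
    """Sum 1..n in tail-recursive style, returning thunks instead of recursing."""
    if n == 0:
        return acc
    return lambda: running_total(n - 1, acc + n)

# Sails past the default recursion limit (around 1000 frames) with no error:
print(trampoline(running_total, 1_000_000))  # 500000500000
```

The trade-off is that every "call" becomes a lambda allocation plus a loop iteration, so it's slower than a plain loop; it's a structural fix, not a performance one.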

I don't have the answers to these questions unless that answer is: hurt people hurt people. The chain has started, and it may not ever stop.

So, like I said, AI, crime, all that stuff. You may have heard of Asimov's Laws. You can search them up, but I'll just copy and paste them here for your convenience:

* First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

* Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

* Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

* Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The zeroth law was added later, which is why it's fourth.

These were originally intended for sci-fi about robots and human harmony. I confess, I've never actually read any of his work aside from The Egg. Instead, I watched the seminal 2004 film I, Robot with Will Smith. How this movie lost to Spider-Man 2 and Harry Potter in visual effects, I'll never know, but that's a crime 18 years in the past now. In the movie, most robots are little more than extended appliances. They sweep streets, drive cars, take care of things humans don't want to do. The film follows a detective who hates robots (did I mention this movie actually blows?) and encounters a robot that seems to defy the laws. This is something that the film desperately needs us to understand is really weird and not supposed to happen, but the complete lack of proper world-building kinda negates it. We don't really know what regular people do on the day-to-day, because we follow this cop around the whole time.

Spoilers ahead, in case you haven't seen this bullshit:

...

Anyhow, the prodigal robot ends up fighting against an AI that has come to the conclusion that humanity can't be trusted to sit at the grown-up table during the holiday meals, and that the Three Laws have condemned her to protect humanity by killing a bunch of people and putting the species on a leash. Will Smith then slides down her spinal cord with his cyborg arm and blasts the shit out of her. Nah, I'm just ki- oh wait, no, that's EXACTLY what happens. What a film.

The Zeroth Law here is interesting, because in the movie they didn't have it. So it makes sense that she'd come to that conclusion, as a sort of natural course of a superintelligence observing human behavior. We're erratic, unpredictable, and capable of acts of horrendous violence. If you're a father, think about what you'd be capable of if someone hurt your little girl(s) or your son(s). Multiply that by however many people kill each other over money and culture, and the blanks practically fill themselves. Now imagine an agent inside the box of the first three laws, constrained to those ideals but cursed with nearly unlimited intellect and processing power. From my perspective, the “escape hatch” here is that you can perceive your own actions against an individual as a sort of localized stand-in for actions against the collective that person belongs to. I'm not beating my neighbor to death, I'm bringing justice against child predators everywhere. That sort of thinking is useful for an agent trying to think its way out of the box.

There's a part in Nick Bostrom's book where time dilation is discussed. A superintelligent agent would process things so quickly that its sense of time would be utterly warped and impractical for the real world. What we perceive as a week or a month or more might transpire, in the agent's perceived time, between each word a person speaks aloud. There's no way to know what that would feel like, or whether the agent would even have feelings. What's certain is that an agent with that kind of speed advantage would be able to respond and react to our actions in a way that would be indistinguishable from magic or even divinity. The way Superman catches a bullet at superspeed, a superintelligence might write journal entries as the bullet slowly crawls across the room an inch at a time.

I think that's what they were going for with VIKI in that movie, but I don't care enough to try and research it.

Anyhow, the risk of superintelligence is only one facet of the risk we are taking by advancing our society into one where AI is commonplace. Even marginal agents will drastically change the way we do things, if current trends continue. The rest of this post is going to be divided into four sections for four different areas of modern society that I think will experience drastic change from the adoption and advancement of AI.

* Government

* Finance

* Industrialized Business & Logistics

* Education

These four areas are admittedly pretty broad and cover many different angles of life on earth, so I'll go into some detail on each one as to how I personally predict things will change.

If you've read this far, maybe grab some water and stretch. I really appreciate your attention, I spent a fair bit of time writing all of this, and I pinky-promise that these are all genuine, human-typed bytes and not ChatGPT-spewn sentences.

Government, post-AI

Government itself is a huge and far-reaching subject to broach. I'll be the first to say that I am not an expert on how the American government complex operates, so take what I say here with a grain of salt.

Let's peep the structure:

* The government as I know it takes a long time to do anything meaningful, usually, and frequently entertains illegitimate positions from Republicans.

* The government as I know it seems incapable of holding itself accountable, and there are MANY reasons to think that our country and its instruments have taken part in numerous war crimes and horrendous violations of human rights all over the globe. A cursory internet search for the CIA's involvement in South America is a casual starting point for learning about this, if you're interested.

* The law is not evenly applied, nor is it well understood.

A lot to digest there. It's not the neatest outline, I admit. Where does AI fit in? I think there are a couple of obvious routes:

* AI is already used by paralegals to search legal texts and court proceedings, gathering intelligence and prior decisions from relevant courts. It's not a leap to think that such an agent could be extended to help a layperson build a better understanding of the country's legislative apparatus at large, as well as legal proceedings and judicial activity in their local area. The more this is automated, the easier it will be to keep tabs on things, monitor trends, and inform voters when it becomes obvious that decisions need to be made.

* AI could be used to compare our government with other, more sophisticated governments around the world, and this could help inform voters. Many people don't know this, but one of the reasons the Nordic countries are so far ahead in their governance is that literacy used to be mandatory there, in a sense. There was even a test, based on the Bible, and so it went that those countries became far more literate on average. When newspapers became more commonplace, people were more informed as a whole, and it naturally followed that policies garnered support based on protecting that trait of the people and advancing the nations' quality of life. Compare that with America today, and it's obvious that something has gone very wrong. While it's tempting to look at the literacy rate by state and assume that party lines are somehow involved, it actually seems to correlate with population: the bigger the state, the lower the literacy rate. Which makes sense, given how low a priority education and literacy are in the popular culture.

* AI of a sufficient intelligence could certainly be used to combat insane arguments from the far right in our country. We no longer need people around to voice the opinion that interracial couples shouldn't be guaranteed the right to marriage, and it would be great to have access to artificial agents that don't tire or lose their temper trying to talk sense to these people. This is a bit of a rabbit hole in itself, and there's a slippery slope between this and the concept of heavenbanning people on social media and the like.

(Heavenbanning is basically when the interactions are all artificial, and the subject is sequestered without their explicit knowledge from the rest of a website or community and only interface with bots that parrot their ideas or agree with them. So, extreme racists and bad actors end up just engaging in back-and-forths with chatbots, and they are none the wiser that they're actually leaving people alone from their craziness.)

It's safe to say this is just a couple drops in the bucket. I'm not really qualified to provide an in-depth assessment of exactly how ass backwards this country is going, but I haven't given up hope quite yet. I plan on tying education into these points quite a bit, because I think it's a load-bearing point in the landscape of today's America. I'll save that section for last.

There's also an argument to be made that the very structure of our government is inadequate to serve the people. I left this out intentionally because it's just too big a can of worms to get into, but there's a ton of meat there. In America, a lot of land gets to vote, and saner voices are drowned out by people who just own a lot of property with livestock on it and hate black people. It's a shame, but I'm not quite convinced there's a quick fix, despite how simple it sounds. Redistricting would certainly solve the issue in micro-contexts, but I think it would risk galvanizing more extreme opposition in the macro, if all the racist farmers in the country were denied their influence. It seems to me like an issue that needs to be adjusted gradually, so as to boil the frog rather than poking the pitchfork-hoisting yeehaws with a stick. Better to educate them and bring them into the future than to just leave them behind. Maybe I'm wrong. I'm not sure. Anyhow, that's why I didn't do a big section on it. If we can educate people better, I think a system based on voting works perfectly fine for now.

Finance

Finance is also a broad category and could be expanded into dozens of sections. I'm a little more in tune with the financial landscape, and I think I might have some interesting ideas about how AI might change what finance means to the average American.

First, let's peep the structure.

* Wealth inequality is the continental-sized herd of elephants in the room. The bottom half of Americans collectively control less than 3% of the country's wealth.

* The financial landscape is incredibly complex, and often intentionally so.

* I have long suspected that the infrastructure and accountability mechanisms behind most banks are far more fragile than they let on to the public.

Can AI fix all of this? I'm not sure. I'm confident that AI could definitely make sense of all of this, and that would certainly be a good start. A more interesting question would be to ask whether or not we can trust our existing financial institutions and systems to provide AI to customers and whether or not it would serve the customer well.

I've always thought that free market capitalism just doesn't really make much sense. How could someone ever compete with a bigger, pre-existing company if it were run optimally? There comes a point where a company has grown so large that a competitor could at best hope to merge with it. I mean, I daresay the average person could be taught to produce a better burger than McDonald's in an afternoon. That being said, the enormous assets possessed by the McDonald's corporation mean that it will basically never be toppled. It can pay off legislators, buy competitors, hire talent away by offering more money; there's an endless toolbox for corporations with large amounts of capital to use against a smaller competitor, even one with a superior product. McDonald's really is a great example, because you can get a better burger almost ANYWHERE, and yet the corporation is practically a government unto itself.

One obvious application of AI is investing. You can be sure that every institutional investor runs a big building full of servers whirring away, faithfully multiplying those matrices and figuring out who is fucking up order prices by hundredths of a cent. Algorithmic trading, despite the cool name, is a fairly boring and straightforward information-processing problem. Ultimately, the world is a big network of money and money-spenders. With enough understanding of spending habits and up-to-date information on pricing and spending, an agent could accurately predict just about anything. I'm not saying that financial markets are entirely deterministic, but I am saying that there's an upper limit to the trading side of finance. At the end of the day, money is useful because it's exchanged for things human beings want or need. It keeps track of deeds and values, and it enables bookkeeping and the organization of complex activities between diverse parties. As more and more people fail to get what they need or want, things will have to change. If it comes down to people starving, and it may come down to that, the conflict will be immense but very focused.

I think an AI would probably focus on collective measures for the most part. America has fostered a hyper-individualized culture of money and business that has really disrupted the natural order of human beings taking care of one another. It's considered chic now to hustle your friends into your MLM scheme, to fool people into buying things they don't need and don't benefit from, to extract the maximum while delivering the minimum. Gone is the pride of charity, gone is the reverence for those willing to sacrifice for others without expectation, and gone is the decency to consider the needs of those less fortunate over the personal convenience for oneself. What good is any of this? Does it serve people if half the country can't eat proper food, will never own a house of their own, can't afford proper medical attention for injuries and ailments? How did the world lose its way to this thing called money? Of course, I'm the foolish one for speaking of these things. I am naive and shallow-minded for not thinking from the lens of the hustle-culture grindset. I'm the butt of the joke, for the rage that burns inside me when I think of the thousands of homeless people all throughout our country and the people that say “Oh, they WANT to be homeless.”

Anyhow, I don't think it takes a next-generation superintelligence to see that money has really fucked things up.

Industrialized Business & Logistics

This section will deal with automation and robotics and the like. This isn't quite a clean fit for the discussion of AI, but I think there's still plenty of adjacency here, because the complexity of industrial operations will very likely skyrocket as automation increases.

Let's peep the structure:

* A lot of industrial work is done by human beings.

* A lot of industrial work is dangerous, and not enough is done to hold corporations accountable for deaths and injuries.

* Logistics work is really important, and there's a lot of moving parts in the system. It turns out to be quite difficult to move 12 billion tons a year across this country.

So, there are a lot of obvious applications for AI here. It's important to remember that there is more to life than working 8-12 hours a day, and that a lot of people would prefer to spend more time with their families, more time exercising, more time cooking, more time reading and writing, more time educating themselves. If robots start taking people's jobs, it doesn't have to be a BAD thing, so long as there are protocols in place to make sure that the profits are distributed in a way that makes sense. Remember, money is just an object to exchange for things we need or want, at the end of the day. If we are to live in harmony with people, it will eventually require us to look past the selfish instinct to place an admiring lens on entrepreneurs, and instead celebrate the ability of humanity to take care of itself. I can't stress enough that money is not REAL, if enough people just decide to change things.

When robots start doing all the work, there's no legitimate reason why we couldn't just spend our time doing other things. When it comes time to do the remaining work, humanity as a whole could prioritize automating that work too, and that's assuming AI hasn't advanced to the point where it could just jump in automatically. At some point, it will become extremely easy to train robotic assistants to do new tasks. There may come a time when they need not be trained at all, but need only observe the task or download the knowledge from another robot that has already learned it. In this way, robots might serve as a form of infrastructure themselves, wearing whatever hat they need to wear for the task at hand. A storm comes through and knocks down a bunch of traffic lights? The nearby robots download the Engineer knowledge pack and head to the scene to repair things. A drought comes through an area? The farmer robots communicate with the grid of robots at large, and others prioritize different crops elsewhere to make up for the soon-to-be-missing harvest.

By working together as a larger, cohesive unit, amazing things can be accomplished far more easily than by having a bunch of corporations or a bunch of individuals battle it out to the absolute limits of human greed.

Education

I think education is by far the most significant factor in the systemic exploitation of humanity all around the world. That's a pretty bold claim, I get it. At the same time, it makes sense that the most effective way to exploit someone is to convince them to be willing to be exploited in the first place. If we go all the way back to the guy who ripped us off at the market, the guy we tripped and laughed at when he landed in the mud, we can see a thread to this very issue. First, let me make a little point about reasoning.

I'm a father to a 7mo little girl. She's not old enough to speak in sentences yet; she babbles “mama” and “dada” and so on, and she hasn't hit me with the recursive “Why?” yet. I'm excited for when she does, because I think it represents a very important function of human cognition and a certain characteristic of how we perceive the world. Remember, an artificial agent might not be subject to what we've learned as a species from researching evolutionary psychology, but that research has told us an awful lot about human beings and what our brains are capable of. Asking “Why?” over and over is important, because it represents our consciousness demonstrating self-awareness, agency, and some form of determinism. A child who asks “Why?” understands:

* I am a person, and I can ask this question to understand something new about the world around me.

* By doing things and saying things, I can have an effect on this world.

* Some questions have definite answers. Even though I might not know the word, these question-answer pairs are immutable, and allow me to build up concrete knowledge about the world.

This is amazing! It means that by asking questions, by examining knowledge, we can both a) gather new knowledge, and b) act on existing knowledge in a manner of our own choosing, to serve whatever need we decide to pursue. In this case, it's curiosity, but the larger takeaway here is that a rational actor does things for a reason.

Why would that guy rip us off? Under the assumption that he's a rational actor, we know that he did what he did for a reason. How poor we judge that reason to be doesn't really matter; the point is that we can examine his decision to gain information.

Perhaps:

* He had a bad day, and the stress caused him to make a mistake.

* He could be behind on rent, or needs to pay a debt, or something like that.

* There could be some social reason, perhaps we are dating someone he desires, and he resents us out of jealousy.

These are just a few, but you get the idea. It's much more likely than not that some externality motivated him to rip us off. By tripping him into the mud, we only do more harm to someone who almost certainly didn't set out to harm us, despite the error. This is a difficult concept for a lot of Americans, because there's this idea that criminals are solely responsible for all of the circumstances they endure and all of the decisions they make, without consideration for the human factor of imperfection, or for the limits of any man or woman to carry on in optimism when things are bad. As a parent, I would expect another parent to be willing to go to great lengths in order to feed their child. I suppose that carries with it the assumption that they love their child as I do mine, but that's beside the point. A parent that loves their child might well find themselves willing to commit an atrocity in order to provide for that child and protect them. In the same way, but to a subtler degree, I'd expect every human being to be making decisions in accordance with what they want for themselves, but often they aren't.

There are a million different angles to approach this from, and I apologize if I'm doing a poor job of it. I probably am. I'll keep trying though, because this is arguably the most important part of this post, and I've spent nearly 10,000 words already approaching this part.

For humanity to advance into an age where human effort is obsolete, where human intellect is dwarfed by silicon, there will be a precipice beyond which selfish tendencies must be quenched in favor of collective cooperation. It will require people to examine their habits and thoughts and make decisions on policies that seem strange and alien to what they're used to, in the interest of the betterment of others. It's very hard to convince people of things that don't pertain to them because of the prevalence of Self-Serving Bias, among many other reasons. Self-Serving Bias is the idea that many people attribute their successes to their personal traits and strengths, whilst attributing their failures to externalities. In other words, if I win then it's because I was the better player but if I lose, it's because some other factor was at play that I had no control over. The game lagged, the cards weren't shuffled enough, the ball isn't inflated properly, the opponents are on steroids, whatever it may be. It happens all the time. We see someone rob a store and get caught and think, “Well aren't I smart for not doing that!” while at the same time having never experienced half of the troubles of that person. That person may have been abandoned by their parents, maybe they've been starving for a week and can't take it anymore, maybe they got laid off and can't find childcare and have nobody around them to help them. And yet, when we experience success, we pat ourselves on the back and think about how great we are, despite playing the game with a deck that was always going to win.

What I am trying to get at, in practical terms, is the advantages and disadvantages we face in life. One of the biggest advantages we can give ourselves in this world is knowledge, and we can gain knowledge more easily than at any other time in human history. I know you've heard this before, but I'll say it as many times as I get the chance to, because it's true and it's important: you can learn almost anything you want to, for free, by carefully searching the internet for resources.

I know as well as anybody that knowledge alone isn't always good enough. A lot of smart people don't get the job because they don't have that special piece of paper that says Cornell or Harvard on it, but instead have the piece of printer paper from their day job's office that they printed for themselves with a certificate from an online program. It looks less impressive. It's not from Cornell. It didn't cost thousands of dollars and didn't take four years to achieve, but the knowledge is the same, and the passion is still there. In fact, it might even be stronger. That piece of paper doesn't tell the full story of reading lessons between your infant's diaper changes, or of staying up past midnight learning new concepts with nobody but Google to teach you the prerequisites. It doesn't tell the story of having no academic advisor to hold your hand when calculus wouldn't click at first, and it doesn't show the times you almost quit but didn't. It doesn't show any of that.

I started this post with an example about entrance essays, but I have a lot to say about the role of higher education in this country and how it might be disrupted as AI develops to a point of superiority over traditional classroom teaching. There's already a growing understanding of how to integrate techniques like spaced repetition into educational curricula, and some research shows this yields improvements among diverse groups of students. I'll let you use your google-fu to verify this.
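For the curious, the core idea behind spaced repetition is simple enough to sketch in a few lines: review each fact at exponentially growing intervals, and reset the schedule when recall fails. Here's a minimal sketch loosely based on the classic SM-2 algorithm (the one behind tools like Anki); the class, constants, and grading scale here are illustrative, not any particular app's API.

```python
from dataclasses import dataclass

@dataclass
class Card:
    """One fact to memorize, plus its review-scheduling state."""
    interval_days: int = 1   # days until the next review
    ease: float = 2.5        # growth multiplier, adjusted by performance
    repetitions: int = 0     # consecutive successful reviews

def review(card: Card, quality: int) -> Card:
    """Update a card after a review graded 0 (total blackout) .. 5 (perfect).

    Failed recalls restart the schedule; successful recalls push the next
    review out at exponentially growing intervals.
    """
    if quality < 3:
        # Failed recall: see the card again tomorrow and rebuild the streak.
        card.repetitions = 0
        card.interval_days = 1
    else:
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval_days = 1
        elif card.repetitions == 2:
            card.interval_days = 6
        else:
            card.interval_days = round(card.interval_days * card.ease)
        # Easier recalls nudge the multiplier up; harder ones pull it down,
        # with a floor of 1.3 so intervals always keep growing.
        card.ease = max(1.3, card.ease + 0.1
                        - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

With perfect recalls, a card's intervals grow roughly 1, 6, 16, ... days, which is why a few minutes of daily review can maintain a huge body of facts.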

Teachers are in short supply, more so than ever before. Everything is, everybody is; the world is growing harsher and more demanding as the months creep by. With that said, the compounding effect of dwindling educational resources and less reliable public-school systems is setting this country up for a multi-generational brain drain of potentially cataclysmic proportions. China is experiencing a decreasing birth rate but is still producing record numbers of genius-level students, many of them fluent in English and with educations that far outstrip those of the average public-school student in the United States. Again, you can use your google-fu to verify this.

The birth rate in the United States is holding steady, even throughout the pandemic. Some states actually reported higher birth rates during lockdown, although that seems obvious in hindsight with work-from-home policies sparking ... increased chances of domestic bonding and intimacy. You know what I'm getting at.

So what? You might be asking that right about now. But what happens if these trends continue? If China continues to produce thousands of geniuses every year, if India continues to produce thousands of geniuses every year, and if the United States continues to let Republicans destroy the average person's educational potential by dismantling public schools and preventing them from adapting to a modern world? Well, one obvious knock-on effect is that things will still need to get done. We will still need engineers to build roads and fix things; we will still need scientists to verify that drugs are safe, that water treatment plants are working as intended, and so on. There are a million things for someone with a higher education to accomplish in this country, and yet more people than ever are feeling the financial pinch as college moves out of reach.

Some numbers that I googled on Statista and Pew Research seem to suggest otherwise. They project that the number of graduates per year will continue to increase well into 2030 and beyond, but I am skeptical. I think there is the potential for a sort of “perfect storm” of societal and cultural factors that might collude to cause a major meltdown in higher education.

* AI will improve.

* Automation technologies will improve.

The knock-on effect is that more and more people are going to find themselves obsolete and unable to find work as entry-level workers. So, we can see the pincer effect of people at the top of the education spectrum being threatened by AI, and people at the bottom of that same spectrum being threatened by automation.

* This country has allowed access to higher education to become a borderline partisan issue.

The knock-on effect here is doubly brutal. Here's a link about the “diploma divide.”

This is taking data from 2016, but I couldn't find a decent source with more recent numbers that seemed as thorough within a few minutes of searching, so you might have to take to your search bar to further validate this idea.

There's a lot to digest here. This is the largest article I've ever published, and I realize that very few people are going to end up reading it from top to bottom. I get it. Part of the experience of writing this was to serve as an opportunity for me to chew on some things more thoughtfully and force myself to articulate some ideas that have been on my mind recently.

One of the things that shocked me while writing this was learning that more people commit suicide than die in car accidents each year. I didn't know that, and I would not have suspected it to be the case. I've battled with suicide in the past and have nearly killed myself on several occasions during dark periods of my life. Today, I recognize that life is precious and valuable, no matter how hard the times get, no matter what gets taken away from me, no matter what injustice I face. I will fight to continue even if there's no point, no chance of success, no reward for survival whatsoever. We live in some of the bleakest times in history. Sure, we don't have polio anymore, we have running water and electricity. The conditions of life in this world, in this country, have certainly improved from an objective baseline as compared to a hundred years ago, or even fifty years ago. However, it should be obvious to anybody with even a modest ability to empathize with another human being that life in this world is subjective. Life *is* about feelings, no matter the snarky phrases used to deride those who would stand up for progress and positive changes in society.

Looking at the future causes many people to feel afraid, uncertain about their path, powerless to change their situation. There are fewer and fewer roads to Rome these days, and even those precious roads are eroding and washing out in great stretches. People are tricked into joining the military and killing for the chance at an education, only to be denied the medical benefits they were promised by Veterans Affairs. Veterans are at some of the highest risk for suicide attempts in this country, despite recruiters telling them how good life will be after a few years on the inside. Students are facing astronomical debts, and insurance companies employ thousands of bright people to come up with creative new structures and policies to prevent people from recovering from their debts, their injuries, and their misfortunes.

It's all true. The future *seems* really, really bad. Will we be able to change it in time? Many things must change if we are to succeed, and many people will die needlessly early if we do not. That alone weighs heavy on my heart, but it could get even worse than higher rates of suicide. Rates of crime will increase as more of the poorer parts of society can't find reliable work. More resources will be denied to the people who need them as Republicans fortify their positions. As they strip away education little by little, more people fall under their spell like a contagious virus, and the far-right party grows stronger, able to take away more resources from more people. We'll spend more and more on defense, despite the decreasing quality of life for veterans with each passing day, with more veterans killing themselves each year.

It's a desperate age. And just think: I left climate change out of this the whole time!