Artificial General Intelligence: a Christian viewpoint
by Valentin Slepukhin | 2,700 words | Reading time 8–16 minutes
This post is an opinion piece by Valentin Slepukhin. As it presents his views on a rather controversial subject, this is a good time to repeat the usual reminder that we publish posts by multiple authors who have different, sometimes contradictory views that do not represent those of Effective Altruism for Christians.
Foreword: In calling this opinion piece a Christian viewpoint, I do not mean that every Christian should believe what is presented here. Rather, it is my personal opinion, one that stems from my Christian mindset. As a Christian, you might disagree with me and find flaws in my arguments, and I will be grateful if you share them.
Introduction
The topic of Artificial General Intelligence (AGI) has recently been the subject of quite intense discussion. There are slightly varying definitions of AGI, so to be specific, I will use the following: Artificial Intelligence that can perform any cognitive task at the same level as or better than any human (which also means faster and cheaper). I mean tasks where performance can be measured: not just writing music, but writing award-winning music; not just doing research, but solving a particular problem and getting a paper accepted by a journal; not just running a company, but outperforming competitors. If such AI appears one day, it will dramatically change our lives.
The first and most obvious difference would be the disappearance of all jobs except manual labour and service jobs: since AGI would perform any cognitive task better than humans, we would not need people to work on them. Ironically, we used to picture the future as a place where dirty manual labour would belong to robots and creative cognitive tasks to humans. AGI could make it quite the opposite, with a human babysitter still cheaper than a robot, at least initially, but a human data scientist far more expensive than running AGI in the cloud.
The second change would be more profound. Since AGI would do cognitive tasks better than humans, essentially from the moment of its invention the design of experiments, data analysis, and the invention of new technologies would be its job rather than that of human researchers. People would still perform experiments and collect data, but they would essentially be like undergrad students in a lab led by AGI, until they are replaced by robots as well. AGI would be our final invention.
There are only three options.
First, we can do everything right, use AGI wisely and benefit from whatever new inventions, medicine or economic reforms it suggests.
Second, we may make an irreversible mistake somewhere, trust AGI too much, and get into a situation where AGI is pursuing its own goals instead of doing exactly what we wanted it to. In this scenario, AGI would already have full control over the automated world, so it could not simply be turned off (see section 4 below). This is essentially a doomsday scenario, where humanity loses control of its future to AGI.
Third, we may simply decide to never build it.
In any case, AGI seems to be something like a point of no return for humanity.
Is AGI realistic? What should my reaction as a Christian be to the idea of AGI? I will consider some statements about AGI that I commonly hear from Christians, and after that I will discuss the implications of these ideas for our actions.
Most common viewpoints
“Machines cannot be conscious and they do not have a soul. They will never be able to do some activities that only humans can do. Therefore AGI is not possible.”
My opinion:
The concept of AGI does not require a machine to have a soul, just to perform certain actions better than humans. The list of things AI can do better than humans is constantly expanding. Chess was considered a sophisticated game requiring a high level of creativity, almost an art. In 1997 Deep Blue (back then not even a neural network) beat world chess champion Garry Kasparov. The same happened with Go in 2016. While we do not have music or poetry competitions that AI wins (yet), one may already easily appreciate its proficiency by using ChatGPT or another AI tool to create a poem upon request. These are examples of creative tasks that one could believe should require a soul, but apparently they do not. It is not clear to me why tasks like doing scientific research or programming should require a soul. If you still think that there is a task X that people can do and a machine will never be capable of, I suggest you write it down and observe the progress of AI in that direction.
“It will not happen in our lifetime, if it happens at all.”
My opinion:
It is almost impossible to predict research timelines far in advance. Check out almost any sci-fi book or movie from the middle of the 20th century. You will see humanity conquering space and travelling across the galaxy, but still with the bulky computers of the 1950s and with biotechnology not much more advanced than in the authors' time. Bearing this in mind, any statement about when (and whether) AGI will be achieved is highly speculative.
However, it seems we cannot say with high certainty that AGI will not arrive in our lifetime (let us say by 2050). Looking at IT progress over the last 25 years, it is really hard to predict what AI will not be able to do in the next 25 years, so it is pretty hard to rule out the possibility of AGI. Even more radically, how can we rule out the chance that one of the very next models (GPT-5, Claude-4 or something like that) will reach AGI level? Looking at the performance of GPT models as a function of their size and simply continuing the power-law behaviour, it seems entirely plausible that GPT-5 or GPT-6 will result in AI capable of being a computer scientist or data scientist on the level of the top researchers in the field, like those who created GPT itself. Then we would get recursive self-improvement, in which every job related to the design of new, better AI is done by AI, quickly leading to better versions until we reach AGI.
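To make "continuing the power law" concrete, here is a minimal sketch (in Python, with purely made-up illustrative numbers, not real benchmark measurements): fit a straight line to past model generations in log-log space and read off where the trend would land for the next ones.

```python
import numpy as np

# Purely illustrative numbers (not real benchmark data): hypothetical
# training compute and benchmark error for four successive model generations.
compute = np.array([1e21, 1e22, 1e23, 1e24])   # training FLOPs
error = np.array([0.40, 0.28, 0.20, 0.14])     # error rate on some benchmark

# A power law error = a * compute**b is a straight line in log-log space,
# so fit a degree-1 polynomial to the logarithms of both quantities.
slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
a = np.exp(intercept)

# Naively extrapolate the trend two more generations (10x compute each time).
for c in (1e25, 1e26):
    print(f"compute {c:.0e} FLOPs -> predicted error {a * c**slope:.3f}")
```

The point of the sketch is only the logic of the argument: if the fitted line keeps going as compute grows, measured performance keeps improving. The contested step is whether the trend actually continues, which is exactly where the caveat in the next paragraph applies.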
Of course, this is merely a handwavy prediction. It is quite likely that there will be a complicated obstacle that is hard to bypass and that an AI winter will start again. However, the fact that we cannot confidently exclude the chance of AGI even in the very near-term future (up to a few months from now, when the next state-of-the-art model is released) indicates that we should not neglect the chance of AGI arriving in our lifetime.
“God will not allow this to happen”
My opinion:
This statement can be true or it can be false. If it is true, however, what should be our role in it? If God will not allow AGI to occur, does it mean that AGI is against God’s will? In that case, we certainly should not help build it. (It is also worth mentioning that we can say that God allows something even if it is not His will, e.g. when He allows us to sin. However, if God does not allow something, it is certainly against His will).
We can consider the following example. One may guess that a full-scale nuclear war with a death toll exceeding all previous wars put together (if not leading to the extinction of humanity altogether) would be against God’s will. Stanislav Petrov disobeyed instructions, potentially preventing a full-scale nuclear war, as commemorated on Petrov Day. He could have just followed his orders and hoped that nothing bad would result from his actions. I am pretty sure that by taking responsibility and not following the instructions he actually followed God’s will. In the same sense, if we expect that God’s will is to protect us from AGI, we had better be on His side.
“AGI will be good almost certainly”
My opinion:
There are two arguments why this is most likely not the case: one from a secular and one from a Christian point of view.
The secular point of view treats AGI very cautiously. There is a whole field of AI alignment asking how we can be sure that an AI is doing what we actually want it to do. When a system that is not very smart is built with a fault, or without taking all possible circumstances into account, it may lead to a catastrophe that is horrible but still survivable, such as the Chernobyl disaster. An AGI that can do everything better than humans, if it somehow went astray, could actually be unstoppable, since humans would not be able to outsmart it and regain control. While this may sound like something from a science fiction story, top researchers and CEOs (including those of OpenAI and Google DeepMind) warn us about the severity of the risk, comparing it with nuclear war. Thus, the risk is definitely serious, and a safe AGI is by no means guaranteed.
From the Christian point of view, we should doubt the good-by-default AGI option even more. If we look at the promises associated with AGI, from the end of boring jobs via robotization to the end of death via progress in biotechnology, they sound like a return to paradise. Look at the verses in Genesis after Adam and Eve ate the forbidden fruit:
To the woman he said, “I will greatly increase your pangs in childbearing; in pain you shall bring forth children, yet your desire shall be for your husband, and he shall rule over you.” And to the man he said, “Because you have listened to the voice of your wife, and have eaten of the tree about which I commanded you, ‘You shall not eat of it,’ cursed is the ground because of you; in toil you shall eat of it all the days of your life; thorns and thistles it shall bring forth for you; and you shall eat the plants of the field. By the sweat of your face you shall eat bread until you return to the ground, for out of it you were taken; you are dust, and to dust you shall return.”
Basically, many people are expecting AI progress to undo these punishments. It is true that we should return to paradise, but we should do it through Jesus Christ, not through AGI. The idea that AI progress will bring us to Paradise does not seem very consistent with Christianity. Progress can make our lives a little bit better, but returning to Paradise is something different. Thus, the scenario of a beneficial AGI building paradise on Earth does not seem consistent with a Christian mindset, at least to me.
“All this is the end times, before the coming of the Antichrist”
My opinion:
The Antichrist (and the Second Coming of Christ) have been predicted multiple times (even the apostles themselves expected Christ to return in their lifetime). Nero, Martin Luther, Peter the Great, Lenin, and many others were considered to be the Antichrist. As we can see, all those predictions were wrong. Does that mean that any such prediction is wrong? It does not: we know from the New Testament that Jesus Christ will eventually return, and we know some details of how it will happen. So at some point someone predicting the coming of the Antichrist will be right. Can it be now? Can it be connected to AGI? In principle, AGI seems far more similar to the Antichrist than anything before it. The promises of eternal life, food for everyone, the knowledge of everything, etc. are much closer to what the devil offered Christ when tempting Him in the desert, and what he promised Eve when tempting her, than anything previous candidates for the Antichrist could have offered. It is really hard to imagine a utopian world of AGI, a paradise on Earth, and assume that something like this would happen before Christ comes back, not after.
What to do about it?
Faith without works is dead.
It would be a pure waste of time to write all of the above (and for you to read it) and then say that we should just live as we used to. If I am taking this seriously, what would I change in my life?
First of all, no panic. For I am persuaded that neither death nor life, nor angels nor principalities nor powers, nor things present nor things to come, nor height nor depth, nor any other created thing, shall be able to separate us from the love of God which is in Christ Jesus our Lord. The Second Coming of Jesus Christ is an event that we are all waiting for with joy despite all the tribulations that we will have to go through.
The first and obvious thing that never fails is to pray more and to ask God for guidance. If there is a fairly high possibility that history will end sooner than I would otherwise expect, what should I do in the time that is left to me? What does God want me to do? Are my relationships with God and with other people right? Is there anything I have been putting off for years and years, until it is too late? Repent, for the kingdom of heaven has come near.
The second thing is to discuss everything with other Christians. Maybe I am completely wrong, and others will correct me. Maybe together we will have more ideas about what to do. “For where two or three are gathered together in My name, I am there in the midst of them.” This is the reason I am writing this post. Hopefully, you will give me some feedback. Of course, I am trying to do this in a way that does not create panic, and if you decide to spread the word, please make sure you spread it with love and hope, not with fear.
Third, whether this is the time of Christ’s coming or not, I had better be on the right side. I think the current movement towards safer AI research is a movement in the right direction.
I certainly support more regulation of the AI sphere, the idea of a pause, not building dangerous AI until we figure out safety, and so on. Here are suggestions for many ways a regular person can be helpful: reaching out, promoting these ideas, etc. Moreover, as a Christian, I think I would have more impact reaching other Christians, by filling a niche that is not yet filled. This post is a first step, and the beginning of a discussion about what else we can do as Christians.
I think AI safety research is good in a red-teaming role, i.e., showing the flaws of current systems, as it makes us more cautious and hopefully attracts more attention to the safety cause. I see less benefit in trying to create different alignment techniques; I lean more towards saying that any AGI seems quite close to the Antichrist, so I prefer that it not be built at all.
Finally, I wanted to write this even though it may seem highly controversial. I personally think AGI is coming very soon, following the Metaculus prediction market, the EpochAI Direct Approach, and the reasoning in Situational Awareness by former OpenAI researcher Leopold Aschenbrenner. However, the general market thinks it is not coming soon. I think the general market is wrong (as it sometimes is; otherwise everyone would have bought Bitcoin in the early 2010s, not only nerds), that AGI is coming much sooner, and my economic behaviour is governed by this belief (for example, I am not saving for retirement, which is 35 years away for me; I also do not support highly long-term projects and prefer something near-term). I understand that it may seem arrogant to trust handwavy heuristics over the efficient market hypothesis. If you want to rebuild your life plans based on the idea of Christ coming soon, please don’t just blindly follow what I have written here. I am not a prophet, just a regular sinner. Pray, read, and use your reason.
Conclusion:
The issue of AGI seems to be very important, something that can really change our lives, and maybe should motivate us to do something now. It is worth it for us as Christians to spend some time pondering this question. Any feedback is extremely welcome, either as comments here or in the EA for Christians Slack.