I don’t know. On one hand, if the crime is so bad that it otherwise warrants lifetime imprisonment…
a) maybe there is a line past which it’s deserved. I do generally view life as sacred and not something you should be able to take from others, but it’s a fuzzy moral question whether there are acts so heinous that they would challenge that view. Maybe it has to be a harm at a societal rather than personal level? Like maybe taking one person’s life isn’t a warranted punishment for them taking a single other life, but a Nazi, say, has harmed not just so many people but some essential essence of the society that keeps us happy and healthy. Maybe THAT is bad enough to merit the ultimate violation of personal rights?
b) Is the alternative THAT much better? Is condemning someone to spend the rest of their life in a tiny room with no hope of them ever getting to do something that they want much better than death? Is it really living a life? (Granted, my opinion on that point is colored by my depression. I genuinely think if things got bad enough in my life suicide would be a preferable alternative. A healthier person might have a different view.)
That said, regardless of the above considerations, there is also the issue that the permanence of the punishment doesn’t allow for correcting mistakes. Humans aren’t infallible. Plenty of people have been wrongly convicted. If they’re merely imprisoned, we can always free them if we later learn of our mistake. If we’ve already killed them… oops? There’s nothing we can do. So perhaps that issue overrides any other moral considerations.
In a broad sense, I don’t agree with the premise that technology is always good and it’s about how society chooses to use it.
Technology enables people to do things that previously weren’t possible. It gives people powers that those who don’t/can’t use the technology don’t have. It fundamentally changes the power dynamics between people. You don’t get to choose how someone else uses the technology. You have to deal with its existence.
For example, guns. Guns are a weapon that enables people to inflict violence on others very effectively without much, if any, athletic prowess. Previously, someone more athletic could have power over someone weaker than them. With guns, the weaker person is on a level playing field.
Now, guns are pretty difficult to manufacture, so an authority might be able to effectively control their availability. But let’s say someone discovers a method that enables basically anyone to make a gun cheaply in their house. Now it’s harder to stop people from getting them. The technology becomes more accessible, and once again this changes the potential power dynamics in society. We could all come to an agreement on how we want to use guns, but that doesn’t really matter if some guy can secretly build a gun in his garage, put it in his pocket, and just go shoot someone. The very existence of this technology has changed the nature of social reality.
Now compare that to AI. Generative AI has enabled people to quickly produce novel media that is becoming increasingly difficult to distinguish from authentic media. While this was technically possible before, it was far more difficult and slow. There is media in the world today that could not have existed without AI. (If only insofar as the larger quantity means that things that wouldn’t have been made in the same time period now can be.)

AI isn’t even a physical device. A computer program is essentially an idea translated into a language a computer can understand. It might be difficult to learn how to program, but anyone with a computer can do it. Anyone can learn how to write a computer virus, so we all have to live in a world where we have to be careful of viruses. Anti-virus software changes that dynamic again, but it hasn’t changed the fact that someone can learn to write a program that gets around it.

Now, AI as it works today is a bit harder to make on your own with knowledge alone, because it requires large quantities of data to train the models. So technologies and policies that restrict people’s access to data could limit the availability of AI. But future developments may discover ways to build AI models with little or no data, at which point it would become easy for anyone to have that technology. So even if right now we put laws in place restricting how AI companies operate, so that people don’t have easy access to the models, or so that the models come built with logic that helps identify their outputs, those laws would be meaningless if it were trivial for anyone to make their own.
Now, this is going to play out differently for every kind of technology, and it’s interesting to discuss, but the root of any human decisions around a technology is the fundamental nature of what that technology is, does, and enables.