On the Use of AI

This article is going to be controversial. Hey, it's me. I'm controversial just breathing. I'm going to talk about AI.

Before I go any further, I'll make this promise. I will never use AI to generate the text of my books. I write each and every letter with my own little fingers. That has always been true and it will remain true through my final book. Well, unless some are published posthumously. I'm not responsible for what happens at that point. Note that I have a section further down devoted to book covers.

AI Is Coming For Our Jobs

AI and robotics will eventually replace most of us. I believe this is a given. We as a society need to start having discussions on what to do about it. With our current plan, it's going to be a catastrophe.

But it doesn't need to be. We as a society can say, sure, AI is great, but we're not going to let the mega-rich keep the windfall. It belongs to all of humanity. We can tax accordingly and provide a very healthy Universal Basic Income for everyone.

However, in the short term, this isn't an immediate risk. AI isn't coming for your job. People using AI are coming for your job.

The Genie is Out of the Bottle

Face it: the genie is out of the bottle, and we're not going to put it back. Frankly, I don't think we should try. If a machine can do something a human is currently doing, I think that's great. It frees the human to do things that are more meaningful.

We just need to deal with the impacts that has. UBI, anyone?

We could legislate against it. Congress could ban AI in the United States. But that would be a huge mistake. We may as well declare ourselves a third-world country, because you can bet China and Russia won't pass the same bans. Why would we give them such a huge advantage?

How I Use AI on a Daily Basis

By day, I am still a computer programmer. I imagine all of you would love it if I wrote full time. I certainly would. But I have bills, and the amount I get paid per book doesn't remotely come close to what I earn from the day job.

In my day job, AI is built into the tools we use. Every developer I've talked to is using GitHub Copilot integrated directly into their tools. What that means is I can write a comment, and Copilot will frequently suggest the entire implementation. Sometimes it's good; sometimes it's not. But it happens seamlessly.

This is absolutely what I mean when I say people using AI are coming for your job. The productivity improvement is huge. As I want my employer to succeed so they can pay me more money and not have future layoffs, I'm happy to increase productivity.

Copilot will also help me comment code. It's quite smart. Or I'll start writing something like "switch (foo)", and Copilot knows what data type foo is and what the possible values are, and it will fill in the code for all the case statements. Freaking impressive.
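To give non-programmers a sense of what that looks like, here's a small sketch in Java. The enum and the messages are made up for illustration, not from my actual codebase; the point is that once the tool knows the type of the variable being switched on, every possible case is predictable, so it can fill them all in.

```java
// Illustrative example: after typing "switch (status)", a tool that knows the
// enum's possible values can suggest every case branch automatically.
public class OrderStatusDemo {
    enum Status { PENDING, SHIPPED, DELIVERED, CANCELLED }

    // Returns a human-readable message for each order status.
    static String describe(Status status) {
        switch (status) {
            case PENDING:   return "Order received, awaiting shipment";
            case SHIPPED:   return "Order is on its way";
            case DELIVERED: return "Order has arrived";
            case CANCELLED: return "Order was cancelled";
            default:        return "Unknown status";
        }
    }

    public static void main(String[] args) {
        // Print the message for every possible status.
        for (Status s : Status.values()) {
            System.out.println(s + ": " + describe(s));
        }
    }
}
```

A human still has to check that the generated branches say the right things, but the boilerplate typing disappears.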

Microsoft recently added Copilot directly to Word. For me, that was like last week. I do not use it, and I do not intend to use it. I write my own words.

As of last week, I pay for a monthly subscription to ChatGPT (OpenAI). It's $20 a month, which makes it as expensive as any other tool I use in my job. For the day job, I use ChatGPT constantly. I've nearly entirely stopped going to Google to answer programming questions.

Last week, I cut and pasted a piece of code and asked ChatGPT, "I wrote this. Is there a more efficient way to do this?"

And ChatGPT lied. It told me to use a standard library method that doesn't exist. Sigh. So the answers aren't perfect. However, ChatGPT also gave me 3 other ways to do the same thing, telling me which were more flexible and which were more efficient, and the answers for that were good and accurate. I ended up changing the code to one ChatGPT recommended. It was more concise and probably about the same performance.

I have told ChatGPT, "I am using this library from Java, and I'm trying to do this. How do I do it?" ChatGPT isn't always right, and I have to check the answers myself. The biggest problems arise when referencing a library that has gone through changes, and it provides answers that no longer work.

I don't know how many hours and hours of time ChatGPT has saved me.

How about when I write?

Because I was using ChatGPT heavily for my day job, I began using it for writing. Not to write, but to help with research. I'm currently working on the 3rd Tri-Vega novel.

Did I just say the third Tri-Vega novel? Yes. Consider that a teaser for the status of the second novel. Tri-Vega 3 is told from the perspective of a French woman named Louise. Anyway.

I've been doing heavy research. I don't speak French. I don't know French culture very well. I've been to France, barely. I'm trying to get things right. And Louise does some traveling. She'll end up in Santa Cruz. She gets to go for a ride or two in a helicopter. What model? Well, isn't that an interesting question? Well, if you like aircraft, anyway.

ChatGPT isn't writing a single word in my books, but it is helping me do this research. And it's far better at it than asking Google. It's not always right. Earlier today, it told me the top of the controlled airspace over a particular airport was 7000 feet. That figure didn't make sense to me, so I looked it up myself and discovered it's 8000 feet.

So, I use ChatGPT much the way we all used Google. I find it gives me high quality answers, but I still have to check the answers.

I also had two Japanese characters (very very minor characters) in this novel. I used my standard tool for generating names, but I couldn't remember if I had written the names down in Western style or Japanese style. In the west, we use our given names first and our family names last. In Japan, they reverse that. In the Japanese style, I am Roseau Robin. Said as an honorific, I would be called Roseau-san. (Or Robin-san only with people very close to me.) Or so I understand it, anyway.

So I asked ChatGPT which order these names were in. ChatGPT understood what I was asking but told me that of the four name parts (two given names, two family names), only one was a common Japanese name. Yikes.

I ended up renaming both characters based on recommendations from the AI. I don't know if the recommendations are any better, but I also indicated "(western order)" when I stored them in my Notes document.

Remember the helicopters? Well, Louise and Libby (the MC from Book 2) will be traveling. A lot. And everyone is pushing Libby to do more, more, more. They decide that giving her a private plane makes economic sense.

Sucks to have vampire-level funds. When you have 400 years to accrue wealth...

So I just looked at small business jets. I looked at layouts. I looked at ranges. Google research. And without naming a specific jet, I came to some decisions.

Well...

I sucked.

There are some Norwegian women in 2 and 3 who are going to learn to fly this airplane. And ChatGPT told me something that made me say, "uh oh."

The jet I was picking probably requires 15 years of flight experience before anyone is going to let you fly it.

I'm not yet to the point in the story where Libby's friends are the pilots. For now, they're using professional pilots. But if Louise's book covers the same kind of timeframe as Book 1 (which was what, 15 years or something?), then yes, they'll be flying the jets.

But they won't be Gulfstream G500s. :-)

Anyway, that's a lot of words, but I wanted to give you an idea of how I use AI for writing. No, it doesn't generate my words. It's strictly a very interesting research tool.

If you aren't using AI, you may want to start. ChatGPT is free for moderate usage. Go ask it something.

Environmental Issues

AI consumes significant power. Training an AI model takes huge amounts of it, and all the major players (Tesla, Microsoft, OpenAI, and Google) are building major data centers to support their AI systems. I don't know how many of those will rely on green energy.

AI queries are also estimated to consume more power than a typical Google search (I've heard ten times as much). But I'm not worried about that, and here's why.

If I ask ChatGPT a question, it takes about 30 seconds, and I get a high, high quality answer. If I ask Google a question, it takes about 3 seconds, and I get a list of web sites. I then have to start scrolling through those websites. I read some of them. I try to find the right answers. Sometimes I get lucky, and the first website listed has what I need. Far more often, if I need help with a question like this, the answer is a lot harder to find than the top link Google hands me.

  1. What is my time worth?
  2. What are the power costs of using my computer to browse from link to link?
  3. What are the power costs on each of those sites when I visit them?

I don't have any way to measure it, but I believe that the power consumption, on average, works out to be about even. Google does it for less power, but then I have to take time and burn electricity to browse through the information. ChatGPT gives me an answer, and I might have to verify it, but it gets me a lot closer.

Which really, when you get down to it, consumes more power? I don't know.

But we can't entirely ignore this, and it's a fair concern. I hope the major AI datacenters are contracting for green power, but I bet they aren't all doing so.

The Ethics of How AI Trains

I understand the concerns. AI read the internet. Copyright violations? Maybe. I don't know. I understand this freaks some people out. I think I understand why.

Frankly, I have a bigger concern with the environmental issues, but I wrote about that above.

So I don't know. I'm going to acknowledge there are legitimate concerns, but I'm not going to weigh in more than this. I'll leave that to smarter people than me, too. (And no, that doesn't mean random people on social media.)

Book Covers

As of February 2025, none of my book covers has knowingly included AI-generated art. I have exactly one cover for a future novella for which I did use AI to generate the image, and I have an image from some time ago that is just begging for a novel to be written.

I pay artists for a very small number of my covers. Most of my covers begin with an image we acquire from Shutterstock and then edit. Our edits are minor. Recrop the image and then possibly expand the size to be the proper aspect ratio and provide an uncluttered area for titles. Add the titles. For a few, we've spent significant time trying to figure out how to get the titles to look readable. This happens when the background is cluttered and there's no clear place to put them.

A year or two ago, maybe a bit more, Photoshop added the ability to do a generative stretch of the image size. That is, we could start with a square image and tell it to stretch the image, using generative AI to do so. To date, we've used that feature only to blend the resize nicely. It's still AI, and it's the same tool a graphics artist is going to use.

I have not decided how I feel on the use of AI to produce my cover art. I believe artists should be compensated fairly. At the same time, I do not believe any artist is owed my business, especially as I do not make a significant amount of money per book.

And then there are the ethics of how the AI models were trained. After all, they did a deep scour of the internet and trained on the art they could find. Is that a copyright violation?

I don't know. Is it any different than if an artist does the same thing herself? If I could paint, and I looked at someone else's work for inspiration, is that a copyright violation? How is the answer any different if AI does it?

I understand it becomes far easier to answer "Yes, it's a problem" when the AI is used to copy a specific artist's style, and there have been cases of AI doing exactly that.

The AI cover I have right now is a jade lovebird. It's quite a striking image. If I could have found a similar image on Shutterstock, I would have used it.

I don't know how I feel. So for now, I suspect I will continue to use generative AI to help me resize my cover art for the purposes outlined above. I may very, very rarely use an entire AI-generated image, but not in places where I would have hired a human artist; I only hire artists for the books that are most important to me.

I'm going to let society mull the ethical issues more. I'm not an ethicist. I'll let smarter people figure things out.