Rio’s Kirstin Thomas Adds a Book She Wrote on AI to Library’s Shelves

Monday, August 26, 2024
Kirstin Thomas and the cover of her book, "The Ethos of Artificial Intelligence: Using AI to Promote the Good Life While Avoiding Harm"

In 1986 the band Timbuk 3 released their only hit single, the catchy synth-rock ditty "The Future's So Bright, I Gotta Wear Shades." In this sarcastic take on the march of progress, the singer gushes about how exciting the future will be while donning dark glasses meant to shield him from the blinding glare of a nuclear explosion. That mix of excitement and dread sums up the conflicting feelings many people have about artificial intelligence: a technology that could radically transform how we live and work while also potentially rendering many of us obsolete. Whether or not those fears come true remains to be seen, but it’s one of many reasons to approach conversations about this technology from an ethical standpoint.

One of Rio Salado’s very own Library staff members, Instructional Coordinator Kirstin Thomas, recently co-authored a book with Joe Danielewicz on the importance of using AI in an ethical way. Titled “The Ethos of Artificial Intelligence: Using AI to Promote the Good Life While Avoiding Harm,” this slim but thought-provoking text packs a lot of insightful information and philosophical exploration into 183 pages.

“The Ethos of Artificial Intelligence” asks readers to consider what they’re getting out of the technology and challenges them to use it with intention. One of the big concerns Thomas and Danielewicz have about AI is the risk of human intelligence becoming ‘disembodied.’

“By increasingly outsourcing learning, creating, and critical thinking tasks to machines, we run the risk of losing something very precious: embodied intelligence,” Thomas explained. “Living in the world used to require a lot more embodied intelligence than it does now. Take navigation, for instance. Before Google Maps, individuals were required to be much more grounded, locationally, in space and time. They had to pay closer attention to the external environment. They had to synthesize where they had been in relation to where they stood and where they were going. If you have ever marveled at how people got along before Google Maps, then you can recognize how much we already rely on technology to assist us in daily life. This is one example of how intelligence is becoming 'disembodied.' I worry that if we increasingly forfeit opportunities for embodied learning, our very capacity to think, learn, and create in an autonomous fashion may slowly begin to atrophy.”

Both authors approach the subject from a fresh perspective. While Danielewicz has years of experience working as a technical writer and Thomas is likewise a seasoned educator and researcher, neither had much hands-on experience with AI before they decided to explore the topic. Part of what makes “The Ethos of Artificial Intelligence” so engaging is the feeling of being on a voyage of discovery with the authors; you can see them finding and refining their arguments, page by page, as they grow more comfortable with the subject at hand.

"I was reading an article that Henry Kissinger was turning a hundred years old and he was writing a book on AI," said Danielewicz. "He knew nothing about AI so if he could do it I felt like my experiences with data architecture and databases made me capable of doing this."

Thomas admits to feeling wary about the technology when she first started exploring the topic. 

"There was this major fatigue after COVID," Thomas said. "It was like 'here's another new thing to contend with.' But I knew I needed to educate myself, so I started playing around with ChatGPT and used ‘The Screwtape Letters’ as a prompt. I was so impressed with what it produced that I came away thinking, 'Wow, this is going to be an earth-shattering thing when it's fully developed.’"

Both authors see the potential in the technology as an aid to learning and creativity when it's used thoughtfully and ethically, but they also have concerns about the ways that it's already being abused. 

"We're already seeing with OpenAI that people are developing these surrogate relationships with their voice assistant," Thomas said. "They're always on your side, always pumping you up with what you want to hear, and that can obviously be problematic… What might that mean for us as human beings when this kind of technology might interfere with people being able to foster healthy relationships in the real world?”

"I'm hoping that our book isn't a downer," Danielewicz said, pointing out that the book also explores the ways in which AI has been a boon to science (particularly in the areas of protein folding and digital twinning in manufacturing). "We try to tell both sides of it because there are a lot of good things.” 

But Danielewicz and Thomas are also quick to point out how AI has been used for surveillance, and how the industry's manic pace in pushing AI out into the world risks companies ignoring safety best practices or failing to consider the long-term ramifications of this technology.

“There was this Pause AI letter that was signed by a lot of people who were researching and specializing in AI, and they were saying that we need to put the brakes on this," Thomas said. "It's almost analogous to Frankenstein. He had smart people telling him 'this isn't a good idea, don't do this' but he had this pathological need to do the thing. It's really reminiscent of how some of the big leaders in AI have been acting."

“The Ethos of Artificial Intelligence” ends on a cautiously optimistic note: this technology is unpredictable, the future is uncertain, but one thing we can control is our character. If we can use this technology responsibly and with restraint, we could learn to master it without it mastering us. You’re less likely to injure yourself with a tool when you use it carefully.

“As an educator, I want students to think critically about their use of Generative AI tools, particularly about their intentions,” Thomas said. “I truly believe that the best way to stay in right-relation to these powerful tools is to ask the simple question, ‘Why am I using AI?’ The answer to that question will tell us everything we need to know about whether we are using these tools skillfully or unskillfully.”


Article by Austin Brietta


Learn More About AI

In addition to co-writing this insightful text on AI and ethics, Thomas has also put together an informative primer on AI for the Rio library. Titled “Framing the AI Puzzle,” this resource breaks down some of the themes and concerns found in “Ethos” while also pointing readers to additional websites, articles, institutions, and podcasts that can broaden their knowledge on this complex set of issues.

Framing the AI Puzzle: A Relational Approach to Understanding Artificial Intelligence