When used ethically, responsibly, and with proper guardrails in place, Artificial Intelligence (A.I.) is a powerful technology that increases creativity, productivity, and efficiency in the workplace. I need not describe or add color commentary to what the alternative approach to using A.I. looks like; "The Terminator's" Sarah Connor has already done that ever so well.
Like many other emerging technologies before it, A.I. is today's "villain du jour." Humans fear what they do not know, do not understand, or cannot control. Because of its nascent stage, A.I. is being cast in negative contexts online, in college faculty rooms, in artistic union negotiations, in legislative groups, and even in the halls of Congress. If your memory is anything like mine, replace "A.I." with "NFTs" or "Blockchain" and you will have identified the "then current" villain of two years ago.
If hindsight is 20/20 and history a great teacher, then one lesson we have learned well, and will only understand better as time passes, is this: the villain is not A.I., but those who refuse to use it ethically and responsibly, or to observe the laws and guardrails, if any are in place.
A.I. is truly powerful technology. We've seen it at work in deep-fake videos and images edited for social media. We've seen it at work in doctored conference call recordings. Scarier yet, we've heard it in flawless renditions and song covers released in the 2020s, performed by A.I. that sounds just like Freddie Mercury did back in the day.
If we let A.I. be used to propagate falsehoods and "deep fakes," steal copyrighted materials or plagiarize intellectual property, create compromising or "revenge" materials to soil someone's reputation, or allow credit or likeness to be stolen from an actor, author, composer, or creator, we become part of the problem and as guilty as the villain.
At the United States Naval Academy I learned on day one that "rank has its responsibilities." You may have heard Spider-Man's variation:
"With great power comes great responsibility." Uncle Ben Parker (Stan Lee)
As I type this blog, A.I. continues its inevitable expansion, becoming an intrinsic and nearly invisible part of our everyday life and work. As a growing technology industry, we can pretend A.I. is an indomitable "Wild West" and that it's someone else's problem to regulate, or we can lean forward and help define the ethics, standards, and boundaries - what will and won't be tolerated - within this rapidly evolving A.I. arena.
I chose the latter; I chose to lead. I look forward to collaborating with other A.I. leaders.
As honest and upstanding professionals, why would we ever intentionally use A.I. unethically?
How do we "watermark" and declare within our work that we used A.I. to create a product?
How do we give credit, recognition, and even residuals, to those whose work we used?
Let's use A.I. to its maximum potential, and let's do so as the Hippocratic Oath instructs - do no harm. To that end, here are three thoughts for all business leaders to consider:
1. Lead your employees and corporations to use A.I. ethically.
Let's use A.I. for good: to help an artist find inspiration, to help a teacher prepare a better lesson for their students, to cure cancer and other diseases faster, or to save the planet we live on.
It's amazing what ChatGPT and OpenAI can do. I lead Fuel For Thought and Artistic Fuel, and many other companies like mine are using A.I. with one intention in mind - helping their customers become better professionals, artists, leaders, and people.
Let's not use A.I. to discard the human workforce. Let's use it to make the workforce we have a better and more productive version of itself.
2. Use A.I. responsibly, showing where you used it.
Remember that high school English class and the teacher who took ten points off your research paper because you forgot to list your sources - the dreaded "bibliography"? Yeah - and you thought you'd never use that lesson.
When we use A.I. and don't declare its sources of information, we wantonly damage the craft of professionals in many disciplines - writers, painters, engineers, actors, teachers, et al. We need to create standards for labeling images, videos, and written works that leveraged A.I., and for listing the sources and the use of A.I. in the bibliography of our work.
Let's use A.I. to augment and complement the work product of our professionals, and help them compete on a level and fair playing field.
3. Let's regulate A.I. before malfeasance, ignorance, and fear come for it.
Ignorance is a curse only cured by well-learned facts and information. The more we actively collaborate to legislate and regulate the use of A.I., the better off we'll be. If you've ever watched C-SPAN or have a moment to search YouTube, you will come across a true circus across the ages, featuring technologies that were feared to bring about the end of civilization yet ended up saving millions of lives - when used ethically and responsibly.
As with all new technologies, business and government need to collaborate - to apply existing laws and write new ones - to "check the power of the genie in the bottle." When I hear the pushback arguments, "regulations are unnecessary" or "we can self-regulate," all I hear is someone who wants to cheat and get away with it.
Let's proactively build a framework of laws to act as uniform guardrails, to make sure we get the most out of A.I., unleash its potential to advance us into a new technological renaissance, and prevent unethical and dangerous uses.
In the end, if we fail to unleash the power of A.I. to do all the good it can do - ethically, responsibly, and in accordance with smart laws - we will look back with regret, and we will have failed to understand the prophetic words Sarah Connor carved in the desert: "No fate."
A.I. is not a villain.
A.I. is what we make of it.