As a PR and communications professional in the tech and entertainment space, and an author, musician, and artist, I know the value of real humans working together to create new things or solve complicated problems. There is a spark that happens when a focused group gets together to make something new or find solutions to seemingly impossible situations. There’s almost a magic to it, and that excitement has a way of building on itself and encouraging others. I also believe this human-driven process is greatly undermined by the unchecked use of generative AI in areas such as enterprise, education, physical health, and mental health. While many hold the fixed mindset that the AI “genie is out of the bottle” and we are all at the whim of whatever happens, I beg to differ: that is a story that serves those in power, many of whom have invested heavily in the largely unregulated AI industry as it stands today. I believe we, as humans, have agency over how we proceed in this brave new world. While no set of governing principles will ever be followed completely (completeness itself is a myth), I believe that by speaking these ideas into existence, we can begin to make sense of our reality and build a more intentional, mindful future: one in which humans benefit from what AI can do, while we minimize the fallout of having machines replace the arts and the very things humans actually enjoy doing. It is my hope that we consider and work to implement the following framework, and similar frameworks of its kind.

Part One: The Shoulds and Should Nots of AI
AI Should Not Be:
When We Do Use AI, It Should Be:
Part Two: AI Use Case and Implementation (What We Create and Use AI For)

What AI Should Do:
AI should help humans without replacing or undermining them.

When AI Should Be Used:
AI should primarily be used to do things that humans can’t do due to limiting factors such as:
1. the scope of data to be analyzed
2. the need for specific, highly accurate, quantifiable answers, calculated by performing tedious, monotonous, mundane tasks
3. the complexities of data systems which require measurable, highly accurate, nuanced answers
4. constraints on time that would render a human’s eventual efforts irrelevant
5. the need to scale with regard to helping people in crisis or with compounding problems
6. other measurable compounding factors that limit outcome effectiveness

Examples of the Above Include:

1. When the Scope of Data Is Beyond Human Ability to Calculate
AI should be used to do the big things that humans can’t do well, such as back-solving for new vaccines by employing pattern recognition across vast amounts of data to produce measurable outcomes. Another use case in this category is the use of AI to analyze genetic and environmental data to help determine the likelihood of diseases such as cancer or Alzheimer’s.

2. When the Job Requires Specific, Accurate, Quantifiable Answers Calculated by Performing Tedious, Mundane Tasks
Doing taxes falls into this category. Most people don’t like doing taxes, and the high degree of accuracy required across multiple data sets lends this task reasonably to the assistance of AI. Quantifiable answers here are answers that lie in analytical and statistical domains, not overtly creative ones.

3. When the Complexities of the Data Systems Require Measurable, Accurate, Nuanced Answers
Some systems are exceptionally complex and difficult to navigate, such as the American healthcare and insurance systems.
The use of AI can help level the playing field for individuals navigating these systems, and help people receive the benefits they are entitled to.

4. When Time Constraints Are Imposed and Accurate Answers or Options Are Critical
Sometimes the clock is ticking while accurate results are critical to making decisions and delivering effective answers and options. Emergency response to natural disasters is one such use case in this category.

5. When Helping People at Scale Is Required During a Crisis
Sometimes emergencies or mental health crises escalate, especially during a disaster or an event that impacts underrepresented communities. In such cases, specifically trained AI might be developed and used to help people find stability, safety, and comfort for their personal safety and mental health until a trained human therapist or counselor is able and available to work with them.

6. When Other Measurable Compounding Factors Limit Outcome Effectiveness at the Expense of Human Life or the Environment
Urgent or emergency scenarios that require measurable, specific outcomes or solutions merit this kind of AI engagement.

Part Three: Organization, Creation, and Cultivation of Data

1. AI Should Reside in Intentionally Cultivated Walled Gardens
Humans have tended food forests and plants since deep in the ancient past, creating plant pairings that were mutually beneficial and long-lasting. The way we think about and use AI can be similar to this process, but it has to be strategic. You wouldn’t expect your average dentist to know much about child psychology or rug weaving, so why would you trust a general AI for detailed expertise on something it may not even be trained in, especially when it doesn’t tell you it has little to no domain knowledge in that space? Let’s focus AI on specific domains or areas of interest.
Such walled gardens can help users have a higher degree of confidence in the answers they receive.

2. Consensually Sourced and Disclosed Training Material
When AI answers questions and solves problems, the results it produces are often based on stolen works, and you’ll usually have no way of knowing it. The libraries used to train generative AI contain works by authors, artists, musicians, and filmmakers who, unless clearly stated, did not consent to their work being used, and who are neither credited nor compensated. We should consider these works intellectual property and require consent before using them in AI training libraries.

3. Entitlement to Compensation for the Use of Creative Works in Training Data
When AI companies train their algorithms on intellectual property that is privately owned and not in the public domain, the author or owner of the work must be financially compensated in the form of royalties paid to them or, if they are deceased, to their estate or another such organization.

4. Public, On-Product Disclosure When Creative Works Are Generated by or Heavily Influenced by AI
When AI is used to create or shape the direction of creative content such as images, music, video, stories, scripts, and other media, clear public disclosure must appear on the public-facing content to inform audiences. If an AI-based work heavily resembles the works of specific human artists, authors, or other creators, those people must be credited; if they are alive, they must be financially compensated in the form of royalties for the use of their work.

5. Ethical Sale of Data Sets
The sale of data sets should be well regulated and favor the rights of the individual. Data sets sold should be scrubbed of any personal identifiers in a way that protects the anonymity of the individual.
While an individual may have legally agreed to give their personal data to one company, that does not give the company the right to package and resell that data, or allow the data to be acquired or absorbed by another company or party, without it first being scrubbed to protect the individual.

6. Protection of Individuals’ Data from Third Parties
One’s personal data, when used or stored in AI-related data sets, may not be given, sold, or disclosed to third parties without the express consent of the individual. Third parties include other companies, organizations, governments, schools, oversight committees, private parties, and so on.

Part Four: Warnings for Those Using AI

1. The Hype About AI Is Well-Funded
There is a huge amount of investment being poured into generative AI right now, along with an entire public-facing PR machine running at full steam to make generative AI look like the safe bet and sure thing that investors want you to believe it is. This is another gold rush. This is another bubble. Don’t trust it blindly.

2. There’s an Environmental Cost
AI requires vast resources to function, both in what it takes to build and train algorithms and in the carbon footprint of the physical resources AI consumes. We should be mindful of this and use AI responsibly.

3. Generative AI Isn’t as Smart as People Think
AI doesn’t really create anything new; it mixes and matches. While that can often be useful (as outlined above), it is not a replacement for human innovation and should not be thought of as such.

4. There Is a Ceiling for Generative AI
The generative AI that exists today has already been trained on the existing libraries of human-made art, writing, music, movies, and media. The input it uses to mix, match, and pair is limited until humans create more work. In many ways, this is as good as it gets.
The Bottom Line
We shouldn’t let machines take away the experiences that bring us joy and make life worth living. We also shouldn’t let unregulated for-profit or government entities use AI in ways that hurt the individual, extract creativity without due compensation, or limit personal freedoms. With sensible regulations and applied ethics, backed by appropriate laws and guidelines and led by an informed public, it is possible to work with AI in ways that bring out the best in us, without replacing or undermining the very humans it seeks to help. To all my fellow creatives out there: stay human!
About the Author
Jennifer is a storyteller who connects big ideas with audiences. She specializes in public relations, brand development, and creative services for startups, theme parks, musicians, authors, nonprofits, and more.

August 2025