Generative AI ChatGPT Is Going To Be Everywhere Once The API Portal Soon Opens, Stupefying AI Ethics And AI Law

Release the Kraken!

You are undoubtedly familiar with that famous catchphrase as especially uttered by actor Liam Neeson in Clash of the Titans when he commands that the legendary sea monster be released, aiming to wreak immense havoc and outsized destruction. The line has been endlessly repeated and spawned all manner of memes. Despite the various parodies, most people still at least viscerally sense that the remark foretells that something shadowy and dangerous is about to be unleashed.

Perhaps the same sentiment can be applied these days to Artificial Intelligence (AI).

Allow me to elaborate.

A recent announcement indicated that a now resoundingly famous AI app called ChatGPT made by the organization OpenAI is soon going to be made available for access by other programs. This is big news. I say this even though little of the regular media has picked up on the pronouncement. Fleeting mentions aside, the full impact of this upcoming access is going to be pretty darned significant.

In today’s column, I’ll be explaining why this is the case. You can prepare yourself accordingly.

Some adamantly believe that this will be akin to letting loose the Kraken, namely that all kinds of bad things are going to arise. Others see this as making available a crucial resource that can boost tons of other apps by leveraging the grand capabilities of ChatGPT. It is either the worst of times or the best of times. We will herein consider both sides of the debate and you can decide for yourself which camp you land in.

Into all of this comes a slew of AI Ethics and AI Law considerations.

Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

There have been growing qualms that ChatGPT and other similar AI apps have an ugly underbelly that maybe we aren’t ready to handle. For example, you might have heard that students in schools are potentially able to cheat on assigned essays by using ChatGPT. The AI does all the writing for them. Meanwhile, the student can seemingly turn in the essay scot-free, as though they did the writing from their own noggin. Not what we presumably want AI to do for humankind.

A few key essentials might be helpful to set the stage for what this is all about.

ChatGPT is a type of AI commonly referred to as Generative AI. These trending generative-based AI apps allow you to enter a brief prompt and have the app generate outputs for you. In the case of ChatGPT, the output is text. Thus, you enter a text prompt and the ChatGPT app produces text for you. I tend to describe this as a particular subtype of Generative AI that is honed to generate text-to-essay outputs (there are other subtypes such as text-to-images, text-to-video, and so on).

The AI maker of ChatGPT has indicated that soon an API (Application Programming Interface) will be made available for the AI app. In short, an API is a means of allowing other programs to go ahead and use a program that makes available a portal into the given application. This means that just about any other program on this planet can potentially leverage the use of ChatGPT (well, as licensed and upon approval by the AI maker of ChatGPT, as will be further discussed momentarily herein).

The upshot is that the use and uses of ChatGPT could potentially shoot through the roof.

Whereas today there is an impressive number of signups by people who use ChatGPT on an individual basis, capped by the AI maker at a million users, that figure is likely to be a drop in the bucket compared to what is about to come.

Realize that those existing million signups consist of some portion that used ChatGPT on a one-time frolic and then, after the thrill wore off, haven’t used it since. Many were presumably attracted to the AI app in a viral social media reaction. In short, if everyone else was wanting to use it, they wanted to do so too. Upon some initial experimentation with the generative-based AI, they felt satisfied that they had averted their FOMO (fear of missing out).

To make it stridently clear, I am not suggesting that people aren’t using ChatGPT. They are. Those that signed up are increasingly finding that the AI app is overloaded. Lots and lots of people are using the app. You get a few cleverly composed sorrowful indications from time to time that the system is busy and you should try back later on. Word on the street is that the existing infrastructure for ChatGPT has been straining to cope with the avid fans using the AI app.

And though having a million potential users is nothing to sneeze at, the number is likely going to readily be eclipsed multifold once the API is made available. Developers of other programs that today have nothing to do with generative AI are going to want to jump on the generative AI bandwagon. They are going to want to connect their program with ChatGPT. Their heart-of-hearts hope is that this will boost their existing program into the stratosphere of popularity.

Think of it this way. Assume that all manner of software companies that make programs today that reach many millions of users, often reaching into the tens and hundreds of millions of users all told, opt to pair up their respective programs with ChatGPT. This suggests that the volume of users that are using ChatGPT could go sky-high.

The Kraken is released.

Why would various software companies want to pair up with ChatGPT, you might be wondering?

A straightforward answer is that they might as well exploit the amazing tailwinds that are pushing ChatGPT onward and upward. Some will do so for sensible and aboveboard reasons. Others will do so merely to try and gain their own semblance of fifteen minutes of fame.

I like to stratify the pairings to ChatGPT as consisting of two major intentions:

  • Genuine Pairing With ChatGPT
  • Fakery Pairing With ChatGPT

In the first case, the notion is that there is a bona fide basis for pairing up with ChatGPT. The makers of a given program are able to well articulate the tangible and functional benefits that will arise due to a pairing of their program with ChatGPT. We can all in a reasonable frame of mind see that the pairing is a match made in heaven.

For the other case, consisting of what I denote as fakery, some will seek to pair up with ChatGPT on a flighty or shaky basis. The business case does not consist of anything especially substantive. The pairing is a desperate attempt to ride the coattails of ChatGPT. Any reasonable inspection would reveal that the pairing is marginal in value. Now then, whether you think that this is a proper or improper form of pairing is somewhat hanging in the air. One could try to argue that a particular pairing with ChatGPT, even if it accomplishes nothing other than boosting usage and has no other functional additive value, is presumably still a pairing worth undertaking.

A bit of a downside will be those that falsely portray the pairing and lead people to believe that something notable is occurring when it really is not. We can certainly expect that some will try this. Those in AI Ethics are stridently concerned about snake oil uses that are going to come out of the woodwork. There is a chance too that if this gets out of hand, we might see new AI-related laws that will be spurred into being drafted and enacted.

Let’s take a closer look at what constitutes genuine pairings and what constitutes fakery pairings.

First, we ought to make sure that we are all on the same page about what Generative AI consists of and also what ChatGPT is all about. Once we cover that foundational facet, we can perform a cogent assessment of how the API into ChatGPT is going to radically change things.

If you are already abundantly familiar with Generative AI and ChatGPT, you can perhaps skim the next section and proceed with the section that follows it. I believe that everyone else will find the vital details about these matters instructive by closely reading the section and getting up to speed.

A Quick Primer About Generative AI And ChatGPT

ChatGPT is a general-purpose AI interactive conversational-oriented system, essentially a seemingly innocuous general chatbot; nonetheless, it is actively and avidly being used by people in ways that are catching many entirely off-guard, as I’ll elaborate shortly. This AI app leverages a technique and technology in the AI realm that is often referred to as Generative AI. The AI generates outputs such as text, which is what ChatGPT does. Other generative-based AI apps produce images such as pictures or artwork, while others generate audio files or videos.

I’ll focus on the text-based generative AI apps in this discussion since that’s what ChatGPT does.

Generative AI apps are exceedingly easy to use.

All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt. The composed text will seem as though the essay was written by the human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln”, the generative AI would provide you with an essay about Lincoln. This is commonly classified as generative AI that performs text-to-text or, as some prefer to call it, text-to-essay output. As mentioned, there are other modes of generative AI, such as text-to-art and text-to-video.

Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.

Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.

That’s why there has been an uproar about students being able to cheat when writing essays outside of the classroom. A teacher cannot merely take the essay that deceitful students assert is their own writing and seek to find out whether it was copied from some other online source. Overall, there won’t be any definitive preexisting essay online that fits the AI-generated essay. All told, the teacher will have to begrudgingly accept that the student wrote the essay as an original piece of work.

There are additional concerns about generative AI.

One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including patently untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).

I’d like to clarify one important aspect before we get into the thick of things on this topic.

There have been some zany outsized claims on social media about Generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of outstretched claims. You might politely say that some people are overstating what today’s AI can actually do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

If you are interested in the rapidly expanding commotion about ChatGPT and Generative AI all told, I’ve been doing a focused series in my column that you might find informative. Here’s a glance in case any of these topics catch your fancy:

  • 1) Predictions Of Generative AI Advances Coming. If you want to know what is likely to unfold about AI throughout 2023 and beyond, including upcoming advances in generative AI and ChatGPT, you’ll want to read my comprehensive list of 2023 predictions at the link here.
  • 2) Generative AI and Mental Health Advice. I opted to review how generative AI and ChatGPT are being used for mental health advice, a troublesome trend, per my focused analysis at the link here.
  • 3) Fundamentals Of Generative AI And ChatGPT. This piece explores the key elements of how generative AI works and in particular delves into the ChatGPT app, including an analysis of the buzz and fanfare, at the link here.
  • 4) Tension Between Teachers And Students Over Generative AI And ChatGPT. Here are the ways that students will deviously use generative AI and ChatGPT. In addition, here are ways for teachers to contend with this tidal wave. See the link here.
  • 5) Context And Generative AI Use. I also did a seasonally flavored tongue-in-cheek examination about a Santa-related context involving ChatGPT and generative AI at the link here.
  • 6) Scammers Using Generative AI. On an ominous note, some scammers have figured out how to use generative AI and ChatGPT to do wrongdoing, including generating scam emails and even producing programming code for malware, see my analysis at the link here.
  • 7) Rookie Mistakes Using Generative AI. Many people are both overshooting and surprisingly undershooting what generative AI and ChatGPT can do, so I looked especially at the undershooting that AI rookies tend to make, see the discussion at the link here.
  • 8) Coping With Generative AI Prompts And AI Hallucinations. I describe a leading-edge approach to using AI add-ons to deal with the various issues associated with trying to enter suitable prompts into generative AI, plus there are additional AI add-ons for detecting so-called AI hallucinated outputs and falsehoods, as covered at the link here.
  • 9) Debunking Bonehead Claims About Detecting Generative AI-Produced Essays. There is a misguided gold rush of AI apps that proclaim to be able to ascertain whether any given essay was human-produced versus AI-generated. Overall, this is misleading and in some cases, a boneheaded and untenable claim, see my coverage at the link here.
  • 10) Role-Playing Via Generative AI Might Portend Mental Health Drawbacks. Some are using generative AI such as ChatGPT to do role-playing, whereby the AI app responds to a human as though existing in a fantasy world or other made-up setting. This could have mental health repercussions, see the link here.
  • 11) Exposing The Range Of Outputted Errors and Falsehoods. Various collected lists are being put together to try and showcase the nature of ChatGPT-produced errors and falsehoods. Some believe this is essential, while others say that the exercise is futile, see my analysis at the link here.
  • 12) Schools Banning Generative AI ChatGPT Are Missing The Boat. You might know that various schools such as the New York City (NYC) Department of Education have declared a ban on the use of ChatGPT on their network and associated devices. Though this might seem a helpful precaution, it won’t move the needle and sadly entirely misses the boat, see my coverage at the link here.

You might find of interest that ChatGPT is based on a version of a predecessor AI app known as GPT-3. ChatGPT is considered a slight next step, referred to as GPT-3.5. It is anticipated that GPT-4 will likely be released in the Spring of 2023. Presumably, GPT-4 is going to be an impressive step forward in terms of being able to produce seemingly even more fluent essays, going deeper, and being an awe-inspiring marvel as to the compositions that it can produce.

You can expect to see a new round of expressed wonderment when springtime comes along and the latest in generative AI is released.

I bring this up because there is another angle to keep in mind, consisting of a potential Achilles heel to these better and bigger generative AI apps. If any AI vendor makes available a generative AI app that frothily spews out foulness, this could dash the hopes of those AI makers. A societal spillover can cause all generative AI to get a serious black eye. People will undoubtedly get quite upset at foul outputs, which have happened many times already and led to boisterous societal condemnation backlashes toward AI.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his own private jet, you would undoubtedly know that this is malarkey. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.

We are ready to move into the next stage of this elucidation.

Unleashing The Beast

Now that we’ve got the fundamentals established, we can dive into the business-oriented and societal repercussions due to the ChatGPT API aspects.

An announcement was recently made by Microsoft in conjunction with OpenAI about the upcoming availability of ChatGPT on the Azure cloud platform of Microsoft (per online posting entitled “General Availability of Azure OpenAI Service Expands Access to Large, Advanced AI Models with Added Enterprise Benefits”, January 16, 2023):

  • “Large language models are quickly becoming an essential platform for people to innovate, apply AI to solve big problems, and imagine what’s possible. Today, we are excited to announce the general availability of Azure OpenAI Service as part of Microsoft’s continued commitment to democratizing AI, and ongoing partnership with OpenAI. With Azure OpenAI Service now generally available, more businesses can apply for access to the most advanced AI models in the world—including GPT-3.5, Codex, and DALL•E 2—backed by the trusted enterprise-grade capabilities and AI-optimized infrastructure of Microsoft Azure, to create cutting-edge applications. Customers will also be able to access ChatGPT—a fine-tuned version of GPT-3.5 that has been trained and runs inference on Azure AI infrastructure—through Azure OpenAI Service soon.”

You might have noticed in that statement that various other AI apps that have been devised by OpenAI will also be available. Indeed, some of those AI apps have already been accessible for quite a while, as noted further in the above pronouncement: “We debuted Azure OpenAI Service in November 2021 to enable customers to tap into the power of large-scale generative AI models with the enterprise promises customers have come to expect from our Azure cloud and computing infrastructure—security, reliability, compliance, data privacy, and built-in Responsible AI capabilities” (ibid).

I earlier mentioned that both AI Ethics and AI Law are trying to balance the AI For Good aspirations with the potential AI For Bad that can at times arise. Within the AI realm, there is a movement afoot toward having Responsible AI or sometimes coined as Trustworthy AI or Human-Centered AI, see my coverage at the link here. All AI makers are urged to devise and field their AI toward AI For Good and seek overtly to curtail or mitigate any AI For Bad that might emerge.

This is a tall order.

In any case, the aforementioned pronouncement did address the Responsible AI considerations:

  • “As an industry leader, we recognize that any innovation in AI must be done responsibly. This becomes even more important with powerful, new technologies like generative models. We have taken an iterative approach to large models, working closely with our partner OpenAI and our customers to carefully assess use cases, learn, and address potential risks. Additionally, we’ve implemented our own guardrails for Azure OpenAI Service that align with our Responsible AI principles. As part of our Limited Access Framework, developers are required to apply for access, describing their intended use case or application before they are given access to the service. Content filters uniquely designed to catch abusive, hateful, and offensive content constantly monitor the input provided to the service as well as the generated content. In the event of a confirmed policy violation, we may ask the developer to take immediate action to prevent further abuse” (ibid).

The crux of that Responsible AI perspective is that by requiring a formal request to access ChatGPT on a program API basis, there is a chance of weeding out the unsavory submissions. If there is suitable due diligence in choosing which other firms and their programs can access ChatGPT, perhaps there is a fighting chance of preventing the full wrath of a released Kraken.

Maybe yes, maybe not.

Some pundits are wringing their hands that the money-making possibilities of allowing the ChatGPT API to be put into use will strain the counterbalancing notion of wanting to keep the beast reasonably and safely contained. Will the scrutiny really be sufficiently careful upfront? Might we see instead that a loosey-goosey-wobbly approval process occurs as the volume of requests gets out of hand? Some are fearful that only once the cat is out of the bag might a belated fuller scrutiny truly occur, though by then the damage will already have been done.

Well, you can at least give due credit that there is a vetting process involved. There are some generative AI apps that either lack a coherent vetting process or that are of a sketchy cursory nature. In addition, there are open-source versions of generative AI that generally can be used by nearly anyone that wants to do so, albeit with some modicum of licensing restrictions that are supposed to be followed (trying to enforce this is harder than it might seem).

Let’s take a quick look at the existing rules regarding limiting access to the Azure OpenAI service to see what other software makers will need to do to potentially connect up with ChatGPT. Per the online posted Microsoft Policies (latest posting indicated as December 14, 2022):

  • “As part of Microsoft’s commitment to responsible AI, we are designing and releasing Azure OpenAI Service with the intention of protecting the rights of individuals and society and fostering transparent human-computer interaction. For this reason, we currently limit the access and use of Azure OpenAI, including limiting access to the ability to modify content filters and modify abuse monitoring. Azure OpenAI requires registration and is currently only available to managed customers and partners working with Microsoft account teams. Customers who wish to use Azure OpenAI are required to submit a registration form both for initial access for experimentation and for approval to move from experimentation to production.”
  • “For experimentation, customers attest to using the service only for the intended uses submitted at the time of registration and commit to incorporating human oversight, strong technical limits on inputs and outputs, feedback channels, and thorough testing. For production, customers explain how these have been implemented to mitigate risk. Customers who wish to modify content filters and modify abuse monitoring after they have onboarded to the service are subject to additional scenario restrictions and are required to register here.”
  • “Access to the Azure OpenAI Service is subject to Microsoft’s sole discretion based on eligibility criteria and a vetting process and customers must acknowledge that they have reviewed and agree to the Azure terms of service for Azure OpenAI Service. Microsoft may require customers to re-verify this information. Azure OpenAI Service is made available to customers under the terms governing their subscription to Microsoft Azure Services, including the Azure OpenAI section of the Microsoft Product Terms. Please review these terms carefully as they contain important conditions and obligations governing your use of Azure OpenAI Service.”

That is on the Microsoft side of things.

OpenAI also has its usage policies associated with its API:

  • “We want everyone to be able to use our API safely and responsibly. To that end, we’ve created use-case and content policies. By following them, you’ll help us make sure that our technology is used for good. If we discover that your product doesn’t follow these policies, we’ll ask you to make necessary changes. If you don’t comply, we may take further action, including terminating your account.”
  • “We prohibit building products that target the following use-cases:”
  • “— Illegal or harmful industries”
  • “— Misuse of personal data”
  • “— Promoting dishonesty”
  • “— Deceiving or manipulating users”
  • “— Trying to influence politics”
  • “The following set of use cases carry a greater risk of potential harm: criminal justice, law enforcement, legal, government and civil services, healthcare, therapy, wellness, coaching, finance, news. For these use-cases, you must:”
  • “1) Thoroughly test our models for accuracy in your use case and be transparent with your users about limitations”
  • “2) Ensure your team has domain expertise and understands/follows relevant laws”
  • “We also don’t allow you or end-users of your application to generate the following types of content:”
  • “— Hate”
  • “— Harassment”
  • “— Violence”
  • “— Self-harm”
  • “— Sexual”
  • “— Political”
  • “— Spam”
  • “— Deception”
  • “— Malware”

A big question will be whether these ideals can be observed if there is a fervent rush of requests to connect with ChatGPT. Perhaps there will be an overwhelming tsunami of requests. The human labor to examine and carefully vet each one could be costly and difficult to manage. Will the desire to be suitably restrictive get inadvertently watered down in the face of the immense demand for access?

As the famed witticism goes, even the best-laid plans can go asunder upon first contact with overwhelming forces.

There is also a lot of leeway in how to interpret the stated rules. As we have seen in general about the rise of disinformation and misinformation, trying to separate the wheat from the chaff can be quite challenging. How is one to determine whether generated content abides by or violates the provisions of not being hateful, political, deceptive, and the like?

A looming difficulty could be that if the ChatGPT API is made available to a software maker that is pairing up their program with ChatGPT, and the resulting output unambiguously violates the stated precepts, will the horse already be out of the barn? Some suggest that there is a strong possibility of reputational harm being incurred by all parties involved. Whether this can be overcome by simply disengaging the API to that particular offender is unclear. The damage, in a sense, might linger and spoil the barrel all told. Plenty of blame will be spread among all comers.

Stratifying The API Pairings

I noted earlier that the pairings to ChatGPT can be conveniently grouped into two major intentions:

  • Genuine Pairing With ChatGPT
  • Fakery Pairing With ChatGPT

Let’s examine first the genuine or bona fide pairings.

As background, the way that this occurs is somewhat straightforward. The ChatGPT app allows other programs to invoke the app. Typically, this would consist of, say, a program we’ll call Widget that passes to ChatGPT a prompt in text format, and then after ChatGPT does its thing, an essay or text is returned to the program Widget. This is almost like a person doing the same thing, though we will have a program do those actions in lieu of a person doing so.
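To make that flow concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the ChatGPT API had not yet been published at the time of writing, so the function names are my own invention and the call to ChatGPT is stubbed with a canned response rather than a real authenticated network request.

```python
# Hypothetical sketch of the "Widget passes a prompt, gets an essay back" flow.
# call_chatgpt is a stand-in; a real version would make an authenticated HTTPS
# request to the vendor's API endpoint once the API is actually available.

def call_chatgpt(prompt: str) -> str:
    """Stubbed API call so this sketch is runnable without network access."""
    return f"[generated essay responding to: {prompt}]"

def widget_handle_user_request(user_text: str) -> str:
    """The Widget program forwards the user's text and returns the essay."""
    essay = call_chatgpt(user_text)
    return essay

print(widget_handle_user_request("Tell me about Abraham Lincoln"))
```

The point of the sketch is simply that the invoking program is doing, mechanically, what a person at the keyboard would otherwise do: submit a prompt, receive an essay.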

For example, suppose someone devises a program that does searches on the web. The program asks a user what they want to search for. The program then provides the user a listing of various search hits or finds that hopefully showcase relevant websites based on the user query.

Imagine that the firm that makes this web searching program wants to spruce up its app.

They request access to the ChatGPT API. Assume they do all the proper paperwork and ultimately get approved. Their program that does web searches would then need to be modified to include a call-out to the ChatGPT app via the API. Assume they opt to make those mods.

Here’s how that might then work altogether. When a user enters a query for a web search into the core program, this program not only does a conventional search of the web, but it also passes the query over to ChatGPT via the API. ChatGPT then processes the text and returns a resultant essay to the core program. The web search core program now presents to the user two facets, namely the web search results and the additional outputted essay from ChatGPT.
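A sketch of that two-facet design might look like the following; both helper functions are hypothetical stubs standing in for a real search backend and a real ChatGPT API call, since neither is specified in any published API at the time of writing.

```python
# Illustrative sketch: a web-search program augmented with a generated essay.
# Both helpers are toy stubs, not real services.

def search_web(query: str) -> list[str]:
    # Stub: a real implementation would query a search index.
    return [f"https://example.com/result-about-{query.replace(' ', '-')}"]

def call_chatgpt(prompt: str) -> str:
    # Stub: a real implementation would invoke the ChatGPT API.
    return f"Essay: an overview of {prompt}."

def handle_query(query: str) -> dict:
    """Return both facets presented to the user: search hits and the essay."""
    return {
        "search_results": search_web(query),
        "generated_essay": call_chatgpt(query),
    }

result = handle_query("Abraham Lincoln")
print(result["search_results"])
print(result["generated_essay"])
```

The core program owns the user experience; ChatGPT merely supplies one of the two facets shown.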

A person using this core program will not necessarily know that ChatGPT was being used on the backend. It can happen within the confines of the core program and the user is blissfully unaware that ChatGPT was involved. On the other hand, the core program could be devised to inform the user that ChatGPT is being used. This usually hinges on whether the makers of the core program want to reveal that the other app, in this case, ChatGPT, was being used. In some arrangements, the maker of the being-invoked program insists that the API-invoking program must let users know that the other program is being utilized. It all depends on preferences and licensing particulars.

For genuine pairings, here are the customary approaches:

  • 1) Straight pass-thru to ChatGPT
  • 2) Add-on to augment ChatGPT
  • 3) Allied app that coincides with ChatGPT
  • 4) Fully integrative immersion with ChatGPT

Briefly, in the first listed approach, the idea is that I might devise a program that is merely a frontend for ChatGPT, and as such, all that my program does is pass the text to ChatGPT and get the text back from ChatGPT. I make my program available for anyone that wants to use ChatGPT and who otherwise hadn’t signed up to use it. This is one approach.

Second, I might devise a program that serves as an add-on to ChatGPT. For example, when ChatGPT generates an essay, it might contain falsehoods. Suppose I craft a program that examines the ChatGPT output and tries to screen for falsehoods. I make my program available such that people enter a text prompt into my program, which then sends the prompt to ChatGPT. ChatGPT produces an essay that comes back to my program. Before my program shows you the essay, it prescreens the essay and attempts to flag or remove falsehoods. You then see the resulting essay after my program has done the screening.
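A toy sketch of such a prescreening add-on follows. Be aware that the screening rule here is a deliberately simplistic placeholder (matching against a hand-maintained list of known-false fragments); genuinely detecting falsehoods in generated essays is a far harder and unsolved problem, and the ChatGPT call is again a stub of my own invention.

```python
# Hedged sketch of an "add-on" wrapper: the prompt goes to ChatGPT (stubbed),
# and the returned essay is prescreened before the user sees it.

def call_chatgpt(prompt: str) -> str:
    # Stub standing in for the actual API call; returns a canned essay that
    # deliberately contains one falsehood for demonstration purposes.
    return ("Abraham Lincoln was the 16th president. "
            "Lincoln flew around the country in his own private jet.")

def flag_falsehoods(essay: str, known_false: list[str]) -> str:
    """Annotate sentences matching a (toy) list of known-false fragments."""
    screened = []
    for sentence in essay.split(". "):
        sentence = sentence.rstrip(".")
        if any(fragment in sentence for fragment in known_false):
            screened.append(f"[FLAGGED: {sentence}]")
        else:
            screened.append(sentence + ".")
    return " ".join(screened)

essay = call_chatgpt("Tell me about Abraham Lincoln")
print(flag_falsehoods(essay, known_false=["private jet"]))
```

The wrapper sits entirely between the user and ChatGPT, which is what makes it an add-on rather than a mere pass-thru.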

The third approach consists of having an allied app that in a sense coincides with ChatGPT. Suppose I develop a program that aids people in doing creative writing. My program provides canned tips and suggestions on how to write creatively, merely prodding or spurring the user to do so. Meanwhile, what I'd like to be able to do is show the user what creative writing consists of. As such, I establish an API connection to ChatGPT. My program then takes a prompt from the user and invokes ChatGPT to provide a blurb of an essay that might demonstrate creative writing. This could be done iteratively, invoking ChatGPT multiple times in the process.
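An allied app of that kind could be structured along these lines. Again, this is purely illustrative: `chatgpt_complete` and the canned tips are invented for the sketch, and the real pairing would invoke the actual ChatGPT API once per demonstration.

```python
# Hypothetical sketch of an allied creative-writing aid that invokes
# ChatGPT (via a stand-in function) once per canned tip, demonstrating
# the iterative multi-invocation pattern described above.

def chatgpt_complete(prompt):
    # Stand-in for the ChatGPT API call.
    return f"Sample creative passage for: {prompt}"

CANNED_TIPS = [
    "Open with a vivid image.",
    "Vary your sentence length.",
]

def coach_session(topic):
    # Pair each canned tip with a generated demonstration, invoking
    # the (stand-in) API once per tip.
    return [(tip, chatgpt_complete(f"{topic}: {tip}")) for tip in CANNED_TIPS]
```

The allied app remains the star of the show; ChatGPT is called upon repeatedly only to supply the worked examples the app itself cannot generate.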

In the case of the fourth listed approach, ChatGPT is fully integrated into some other program or set of programs. For example, if I had a word-processing app and a spreadsheet app, I might want to integrate ChatGPT into those apps. They would, in a manner of speaking, function hand-in-hand with each other. I’ll be covering in an upcoming column posting the floated possibility that Microsoft could opt to infuse ChatGPT into their office productivity suite, so be on the lookout for that coming analysis.

Those then are the key ways that a genuine pairing might take place.

Let’s next consider some of the fakery pairings.

Here are some overall fakery pairings that you ought to be watchful of:

  • Gimmickry Pairing With ChatGPT – marginally utilizes ChatGPT, mainly done for show and to garner publicity with no added value
  • Alluring Promises About ChatGPT Pairing – software vendor claims they are in the midst of pairing to ChatGPT, seeking to be in the shining spotlight, when the reality is that they aren’t going to do so and are doing a classic head fake and rendering a false pledge
  • Knockoffs Proclaiming To Be ChatGPT-like – rather than pairing with ChatGPT, some software vendors will use something else, which is fine, but they will attempt to imply it is ChatGPT when it is not, hoping to get some of the afterglow of ChatGPT
  • Other – lots of additional outstretched and conniving schemes are conceivable

There will undoubtedly be a lot of shenanigans going on about all of this. It will be part and parcel of the release of the Kraken.

Conclusion

Allow me to toss some wrenches and other obstructions into this matter.

What about the cost?

Currently, those who signed up to use ChatGPT are doing so for free. I have previously remarked that at some point there will need to be outright monetization involved. This might entail a per-transaction fee or perhaps paying for a subscription. Another possibility is that ads might be used to bring in the dough, whereby each time you use ChatGPT an ad will appear. Etc.

Those who opt to establish a pairing with ChatGPT via the API should seriously mull over the potential costs involved. There is likely a cost pertaining to the use of the Microsoft Azure cloud for the running of the ChatGPT app. There is bound to be a cost from OpenAI to use the ChatGPT API and invoke the AI app. A software vendor will incur their own internal costs too, such as modifying their existing programs to use the API or developing programs anew around the pairing with ChatGPT. Envision both a getting-started cost and an ongoing set of upkeep costs too.

The gist is that this layering of costs is going to moderate to some degree the gold rush toward leveraging the ChatGPT API. Software vendors presumably should do a prudent ROI (return on investment) calculation. Will whatever they can make via augmenting their program by using ChatGPT bring in sufficient added monies to offset the costs?
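As a back-of-the-envelope illustration of that ROI calculation, consider the sketch below. Every number in it is entirely made up for the example; none of these figures come from OpenAI, Microsoft, or any actual vendor.

```python
# Back-of-the-envelope ROI sketch with entirely made-up numbers.
# All figures are hypothetical illustrations, not real pricing.

monthly_api_fees = 2000.00       # assumed per-call fees to use the API
monthly_cloud_costs = 500.00     # assumed cloud hosting overhead
one_time_dev_cost = 30000.00     # assumed cost to build the pairing
months = 12

added_monthly_revenue = 4000.00  # assumed uplift from the ChatGPT feature

total_cost = one_time_dev_cost + months * (monthly_api_fees + monthly_cloud_costs)
total_gain = months * added_monthly_revenue
roi = (total_gain - total_cost) / total_cost

print(f"Year-one ROI: {roi:.1%}")
```

With these invented numbers the first-year ROI comes out negative, which is exactly the layering-of-costs concern: the ChatGPT-powered feature would need to pull in considerably more added revenue before the pairing pays for itself.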

Not everyone is necessarily going to be quite so cost-conscious. If you have deep pockets and believe that your use of ChatGPT will propel your program into the ranks of the best-known and most highly recognized apps, you might decide that the cost right now is worth it. Build a name for your app by riding on the coattails of ChatGPT. Down the road, once your program is popular or otherwise making money, you either make up for the earlier lack of profit or write it off as the cost required to get into the big-time.

A small startup backed by a Venture Capital (VC) firm might be willing to fork over a chunk of its seed investment to get paired up with the ChatGPT API. Fame might instantly arise. Fortune might be a long way down the road, but that’s something to be dealt with later on. Grab the limelight when the getting is good, as they say.

One supposes that there might be non-profits and social enterprises that will decide to also kick the tires on this. Suppose a non-profit firm identifies a beneficial use of invoking the ChatGPT API that will seemingly support their altruistic or societally beneficial goals. Maybe they raise funds for this via an online funding campaign. Perhaps they try to cut a deal with the vendors so that they pay a nominal amount or get the use for free.

Time will tell.

The last pointer that I’ll leave you with is the risk factor.

I don’t want to seem unduly downbeat, but I mentioned earlier that the outputs from ChatGPT can contain falsehoods and have other potential downsides. The thing is, if a software vendor making an app opts to use the ChatGPT API, they run the risk of receiving sour and dour outputs, which they ought to be anticipating. Do not put your head in the sand.

The trouble is that those outputs could taint the app that opts to utilize them. In that sense, your app that is at first riding the glory of ChatGPT could also end up plowing into a brick wall if the outputs provided via ChatGPT are relied upon. Perhaps the outputs are presented to users and this causes a horrendous ruckus. They take out their angst on you and your app.

In turn, you try to point a finger at ChatGPT. Will that get you out of the conundrum? Once there is a stink, it permeates widely and few are spared. Along those lines, it could be that via a widened use of ChatGPT, the awareness of the foul outputs becomes more commonly known. Thus, oddly enough, or ironically, the expanded use of ChatGPT due to the API could end up shooting itself in the foot.

I don’t want to conclude my remarks with a sad face so let’s try to shift into a happy face.

If all of the aspirational limits and constraints are mindfully and judiciously adhered to, tapping into ChatGPT via the API can be a good thing. The chances are too that this will further spur other generative AI apps into action. A rising tide could lift all boats.

That seems upbeat.

You might know that Zeus was said to be in control of the Kraken. Sophocles, the ancient Greek playwright, said this about Zeus: “The dice of Zeus always fall luckily.”

Maybe the same will be said of how generative AI will inevitably land, let’s at least hope so.
