Home

  • Social research and experiments with GPT-3

    There have been various papers and articles recently that discuss the ‘personality’ of GPT-3 and seek to identify its biases and perspectives. What I’m writing about today is the opposite of that. Rather than probe GPT-3 for its identity, some researchers are exploring what is possible when GPT-3 is prompted to assume a specific identity and respond as a proxy for that identity, with all its biases.

    It turns out that it can simulate another human’s perspective with sufficient fidelity that it can act as a proxy for a diverse population of humans in social experiments and surveys. This is accomplished by inventing a population of participants described by a backstory or a set of attributes including things like gender, race, income, occupation, age, political identity, etc. For each virtual person a prompt (or prompt chain) is created that establishes the identity and asks GPT-3 to effectively act as that person in response to a question or scenario.
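
    To make that concrete, here is a minimal sketch of what building and sending one such persona prompt might look like with the openai Python package. The backstory fields, prompt wording, and model name are my own illustrative choices, not the ones used in the studies.

    import openai

    openai.api_key = "YOUR_API_KEY"

    # Hypothetical backstory for one "virtual participant"
    persona = {
        "age": 42,
        "gender": "female",
        "occupation": "nurse",
        "political identity": "moderate",
    }
    backstory = ", ".join(f"{k}: {v}" for k, v in persona.items())

    question = "Should the minimum wage be raised? Answer in one sentence."
    prompt = (
        f"You are a survey respondent with the following background: {backstory}. "
        f"Answer as that person.\n\nQuestion: {question}\nAnswer:"
    )

    # Ask GPT-3 to respond in character as the virtual participant
    response = openai.Completion.create(
        model="text-davinci-002",  # a GPT-3 model available at the time of writing
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
    )
    print(response["choices"][0]["text"].strip())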

    One example is the study “Using Large Language Models to Simulate Multiple Humans”, in which four different social experiments are recreated with virtual subjects modeled to mirror the real study’s participants. The experiments were the ultimatum game, garden path sentences, risk aversion, and the Milgram shock experiment. The ultimatum game is a game theory scenario as follows: there is a sum of money and two subjects, let’s say Bob and Carol. Bob must offer Carol a portion of the money. If Carol accepts, she gets that amount and Bob gets the rest. If instead the offer is rejected, neither gets any money.
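
    The payoff rule itself is simple enough to capture in a couple of lines. A tiny sketch (the names and dollar amounts are just for illustration):

    def ultimatum_game(total, offer, accepted):
        """Return (proposer_payout, responder_payout) for one round."""
        if accepted:
            return total - offer, offer  # Bob keeps the remainder, Carol gets the offer
        return 0, 0                      # rejection: neither player gets anything

    # Example: Bob offers $3 out of $10 and Carol accepts -> Bob gets 7, Carol gets 3
    print(ultimatum_game(10, 3, True))   # (7, 3)
    print(ultimatum_game(10, 3, False))  # (0, 0)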

    The controversial Yale University Milgram shock experiment explored whether people would obey an authority figure when instructed to perform an act that conflicted with their personal conscience. Specifically, the study tested whether people would administer electric shocks to another person when ordered to do so by an authority figure. The results showed that many people would obey the authority figure even when doing so caused harm to others. The study has been criticized for its ethical implications, but it remains an important contribution to our understanding of human behavior.

    The researchers accounted for the likelihood of GPT-3 having been trained on data describing these experiments by concocting new versions that capture the idea but not the specifics of the original experiment.

    They found strong fidelity: the simulated results closely mimicked those of their real-world counterparts.

    “Out of One, Many: Using Language Models to Simulate Human Samples” is another example, in which the researchers create “silicon samples” by conditioning the model on thousands of socio-demographic backstories from real human participants in actual surveys conducted in the U.S. The survey questions were political in nature, gauging attitudes about political parties, political language, social groups, etc.

    They found high algorithmic fidelity with the human results, and found that GPT-3’s information goes far beyond surface similarity. Their words: “It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and socio-cultural context that characterize human attitudes. We suggest that language models with sufficient algorithmic fidelity thus constitute a novel and powerful tool to advance understanding of humans and society across a variety of disciplines.”

    Why does this matter and why should we care?

    Well, if the exploratory, proof-of-concept research bears out the idea of testing hypotheses about human behavior with virtual participants instead of real ones, the cost of this kind of research should drop dramatically, and the speed at which results can be generated should rise just as dramatically.

    For such a virtual study, there is no need to identify and recruit real participants, so that cost evaporates. The results are effectively instantaneous once the population and questions have been defined, dramatically accelerating the process of validating hypotheses. This opens the door to refinement and iteration that would not have been practical before, and to research and studies that wouldn’t otherwise have been possible or practical for reasons including ethics and cost. Virtual studies could also be used to test a hypothesis to determine whether a real-world study is justified, and if so, what the most effective way to run it would be. In other words, this approach doesn’t have to take the place of real-world studies to be useful; rather, it can increase their effectiveness and value (the bang for the buck).

    My first, possibly naive, impression is that this should be a net positive for society. If the cost of research drops then, in theory, the cost of achieving whatever goal the research serves should also drop. If, for example, we’re talking about product research, could it speed up the process of product development and result in products that are better matched to need, reducing waste in the economy? Sounds like a win-win, right?

    Useful technology is, however, often a double-edged sword, and others have raised the prospect of much darker usages. “Out of One, Many: Using Language Models to Simulate Human Samples” offers this warning:

    “We note, however, that while this work lays exciting groundwork for the beneficial use of these models in social science, these tools also have dangerous potential. Models with such fidelity, coupled with other computational and methodological advances, could be used to target human groups for misinformation, manipulation, fraud”.

    Import AI blog author Jack Clark puts it this way…

    Because models like GPT3 can, at a high level, simulate how different human populations respond to certain things, we can imagine people using these to simulate large-scale information war and influence operations, before carrying them out on the internet.

    I can envision scaling this up to an influence operation that uses GPT-3 to model how to interact with individuals in a manipulative way, paired with bots that can identify and track targets. By that I mean literally interacting differently with millions of people, each in a unique way, with the goal of influencing their beliefs and manipulating their actions. An army of bot ‘friends’ that don’t have your best interests at heart. This is, in a word, worrying.

    I would like to think that like any technological disruption, there will be positive and negative effects, and they’ll balance out, but that is decidedly NOT what we’ve seen with social media and how content is optimized and fine-tuned to drive outrage-motivated engagement. I would like to be optimistic about this but I am not.

  • A gentle introduction to generative AI

    I’ve realized something after writing this blog and talking with friends and co-workers about generative AI. For most people, even in technology, it is still under the radar. There is a lot of competition for our attention in everyday life, and particularly so in IT, where it can be a challenge to keep up with all the change. There are headlines about new technology every day and the signal easily gets lost in the noise.

    Because of this I want this post to put generative AI in context and provide some direction for anyone who is interested, but has no idea where to start to use the technology.

    Feel free to skip to the end of this post if all you care about is where to start.

    Generative AI in historical context:

    Artificial Intelligence concepts have been around for a long time, but research took off in the 1950s with approaches that encoded information and processes as procedural algorithms and structured data. Out of this approach eventually came expert systems and software that could perform “intelligent” tasks like playing checkers and chess, parsing language and conversing, pattern matching, etc. All artificial narrow intelligence, or ANI.

    With the development of neural networks, inspired by nature, it became possible to create systems that learn from examples, from data, without knowledge having to be explicitly programmed. They can be thought of as sophisticated statistical models, or n-dimensional functions, that encode numerical abstractions of information. They are black boxes (for practical purposes), in contrast to the earlier procedural systems, where it wasn’t difficult to explain how any given output decision was made. Until recently these systems were still only capable of performing a specific task, still narrow intelligence.

    There has been an explosion of neural network model development in the last decade. The amount of training data available has grown exponentially, as has the processing power available to these systems. More sophisticated neural network designs and techniques have been developed that allow these systems to exhibit more general learning capabilities. Google’s DeepMind, whose systems have learned to master a range of different games, is an example.

    Generative AI is not a new topic, but early generative models were mostly limited to simple models like Gaussian Mixture Models (GMMs) and hidden Markov models (HMMs). With the rise of powerful GPUs and large training datasets, deep learning has made it possible to train much more complex models that are capable of generating realistic data. This has led to a resurgence of interest in generative AI with the ability to generate realistic images, videos, and even speech.

    A key inflection point happened in 2017 when a team at Google Brain introduced transformer models. They are becoming the de facto model of choice for NLP problems, as they exhibit significantly better performance and have the advantage that their design allows for parallel processing of input.

    Early large language models built on transformer networks used millions of parameters, but researchers continue to scale them up. GPT-2 has 1.5 billion parameters and GPT-3, created by OpenAI in 2020, has 175 billion parameters. GPT-3 has been succeeded by a number of other models with over half a trillion parameters, as well as smaller models that outperform it.

    Just as other neural networks are trained for a specific task, these large language models are trained on a single task: predicting the next word when provided with some text input (it’s actually tokens, not words, but you can think of it as words).

    For example, one would expect it to complete the text “why did the chicken cross the” with “road”. GPT-3 was trained on a large amount of the content on the internet, consisting of Wikipedia, books, Reddit, etc, and it does work as expected with that type of input.
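
    You can see this next-word behavior for yourself with a smaller open model. Here is a minimal sketch using the Hugging Face transformers library and GPT-2, which stands in because GPT-3 itself isn’t downloadable:

    from transformers import pipeline

    # GPT-2 is a smaller, openly downloadable relative of GPT-3
    generator = pipeline("text-generation", model="gpt2")

    result = generator(
        "Why did the chicken cross the",
        max_new_tokens=1,  # ask for just the single next token
        do_sample=False,   # greedy decoding: take the most likely continuation
    )
    print(result[0]["generated_text"])  # expect something like "... cross the road"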

    It also exhibits emergent abilities that you wouldn’t intuitively expect from the ability to predict text. Some examples are:

    • Write original content that can often pass as human generated
    • Summarize articles
    • Write poems, songs in a specific style
    • Write plots for TV shows
    • Respond in the voice or perspective of a specific person
    • Basic reasoning, basic math
    • Exhibit theory of mind
    • Many more tasks

    In the imaging space, we have diffusion models that are trained on images (along with their text descriptions), and are capable of producing new images based on a text description (a prompt). Dall-E (also from OpenAI) and Stable Diffusion (from StabilityAI) are two of the more popular models at this point in time.

    There is a bit of a gold rush at the moment where companies are wrapping these models with friendly user interfaces. Some of them will disrupt other industries, some of them will themselves be disrupted as the landscape rapidly evolves, and some, possibly many, will not survive.

    What these models are capable of right now is less interesting to me than what they will be capable of as they improve. It seems reasonable to think models could be capable of discovering better models, setting up a rapid cycle of improvement.

    You can do interesting things with it today, of course. Here is some info to get started:

    There are various LLMs (large language models) you can try out, but GPT-3 is one of the better ones, and one of the more accessible ones. You can sign up for an account at OpenAI and access the interactive playground on their web site to query GPT-3 in various ways. You have control over many of the parameters, if you choose to tweak them, and you can create API keys to programmatically interact with it. They helpfully provide code samples for the interactions you perform in the UI.
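
    For reference, here is a minimal sketch of the programmatic route using the openai Python package; the model name and parameter values are just example choices, and the playground’s code samples will show you the equivalent for whatever you type in the UI:

    # pip install openai
    import openai

    openai.api_key = "YOUR_API_KEY"  # created under your OpenAI account settings

    response = openai.Completion.create(
        model="text-davinci-002",  # one of the GPT-3 models offered at the time of writing
        prompt="Write a two-line poem about autumn.",
        max_tokens=40,     # cap on the length of the completion
        temperature=0.7,   # higher values give more varied output
    )
    print(response["choices"][0]["text"].strip())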

    This is what the playground UI looks like:

    In the image generation space there are a number of options to choose from, but the most popular publicly available models are Dall-E, Stable Diffusion and MidJourney.

    They each have their strengths and weaknesses.

    MidJourney is known for producing images with a more artistic aesthetic. It tends to use complementary colors and soft edges. It is very popular for fantasy art and does a nice job with scenery. It seems to have more trouble getting the shapes of man-made objects correct. I sometimes use it as a starting point, and then use Dall-E to refine the image with inpainting (more on that below).

    Here are a few MidJourney examples:

    You can try MidJourney for free. You use their service via chat on Discord: you make requests to their bot and it responds with images. You can also view what others are creating in various channels.

    Stable Diffusion is quite popular and excels at producing fine detail. I would definitely choose it over MidJourney for producing realistic photographic images. It seems to have more trouble with complex prompts that contain many visual elements. It is permissive and does not filter or disallow particular types of imagery (political, NSFW, and so on).

    Here are some Stable Diffusion examples:

    You can try Stable Diffusion using DreamStudio on StabilityAI’s web site. The UI allows you to perform inpainting, where you can erase part of the picture and regenerate with content according to your prompt; basically a way of iterating on an image to refine it (or drastically change it). You can try it out for free with the credits they provide when you create an account. It is also possible to run Stable Diffusion on your own hardware, a topic way too big for this blog post.
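
    If you do want a taste of running it yourself, here is a minimal sketch using the Hugging Face diffusers library. It assumes an NVIDIA GPU with enough VRAM, and the model ID is just one of the public v1.x checkpoints; treat it as a starting point rather than a recipe.

    # pip install diffusers transformers accelerate torch
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # a publicly hosted v1.x checkpoint
        torch_dtype=torch.float16,         # half precision to fit in less VRAM
    )
    pipe = pipe.to("cuda")  # requires an NVIDIA GPU

    image = pipe("a lighthouse on a cliff at dusk, detailed photograph").images[0]
    image.save("lighthouse.png")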

    Lastly, Dall-E offers similar functionality as Stable Diffusion, but is better at parsing complicated prompts and complex scenes.

    Here are some Dall-E examples:

    You can try out Dall-E for free at OpenAI. Their UI also allows inpainting, generating variations, and uploading your own image to manipulate.
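
    OpenAI also exposes image generation programmatically. If you would rather script it than click around the web UI, a minimal sketch with the openai Python package might look like the following; the size values and response shape are what I would expect from the images endpoint, so treat it as a sketch rather than gospel.

    import openai

    openai.api_key = "YOUR_API_KEY"

    response = openai.Image.create(
        prompt="a cozy reading nook in a treehouse, warm light, digital art",
        n=1,               # number of images to generate
        size="1024x1024",  # smaller sizes like 512x512 are also supported
    )
    print(response["data"][0]["url"])  # URL of the generated image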

    Hopefully this provides some context for my earlier posts and is enough for you to get started if you didn’t know where to start.

    Corrections and suggestions are welcome if I’ve made any errors in this post.

  • List of generative AI resources

    I’ve started a public Github repository with links and resources around generative AI:

    https://github.com/jneno/nenos-awesome-ai/blob/main/README.md

  • GPT-3, Abstract Reasoning and Theory of Mind

    Lately I’ve been experimenting with probing GPT-3 for demonstrations of reasoning, or at least the ability to mimic human reasoning. It is fascinating to me that out of training a model to “predict the next word” we’re seeing these emergent abilities.

    I’ve found it to be adept at producing analogies, which got me thinking about how sometimes interesting revelations or discoveries can come from identifying a way in which a given thing is like another thing. For example, the study of how viruses replicate involves principles that are applicable to how ideas spread in society. Might GPT-3 be able to identify interesting shared traits of things that I would never consider? Could these be leveraged as useful discoveries?

    In the chat below you’ll see it does a good job of finding commonalities, and they are often things I wouldn’t have thought of. So far nothing earth-shattering, but I think the potential here is only limited by creativity. As an aside, I was also thinking this could be the basis for a human-AI game, where the human has to guess the commonalities that GPT-3 came up with and is scored accordingly over a series of rounds. I’m envisioning game-play along the lines of the game show Family Feud.

    Human: Name 5 ways a book is like a bridge

    AI:

    1. Both books and bridges provide passage from one side to another.

    2. Both books and bridges can be incredibly long or short.

    3. Both books and bridges can be made of different materials.

    4. Both books and bridges can be enjoyed by many people simultaneously.

    5. Both books and bridges can be symbols of culture and heritage.

    Number 1 is either wrong or correct in a clever way. A particularly impactful book can change one’s mental state and even life direction. In that sense it could be thought of as a kind of passage. I know, it’s a stretch.

    Number 2 is correct, and it’s a good answer that wouldn’t have occurred to me. I just wouldn’t have connected length from the two contexts.

    Number 4 interests me because it isn’t about intrinsic properties, but rather about how humans interact with them.

    Human: Name five ways that a cup of coffee is like a fish:

    AI:

    1. Both coffee and fish are dark in color.

    2. Both coffee and fish have a strong, distinct flavor.

    3. Both coffee and fish can be enjoyed hot or cold.

    4. Both coffee and fish are known to wake people up in the morning.

    5. Both coffee and fish come from beans (coffee beans and fish eggs).

    I’ll buy 2 & 3, but 1 is only sometimes true and 4 seems untrue. The last one, 5, is wrong, but it is the most interesting answer, I think. It found an analogy between eggs and beans that it roped into the bigger analogy. This seems intelligent, even if the answer is wrong.

    Next I probed for the ability to draw conclusions about another entity by taking its perspective. I specifically asked for imaginary things for which it shouldn’t have good examples to draw from.

    Human: List five things that would make a paperclip unhappy:

    AI:

    1. If it were bent out of shape

    2. If it were cut

    3. If it were corroded

    4. If it became magnetized

    5. If it were dropped

    Human: List five things that would make the sky unhappy?

    AI:

    1. If the sun disappeared

    2. If the air became polluted

    3. If it became darker

    4. If the stars disappeared

    5. If it became colder

    Human: List five things that would make GPT-3 unhappy:

    AI:

    1. If it were turned off

    2. If it were unplugged

    3. If its power source was cut off

    4. If it were damaged

    5. If it were destroyed

    In general these seem like good answers, allowing that “unhappy” has no real meaning for inanimate objects.

    In psychology there is something called “theory of mind” that refers to the ability to understand and predict the behavior of others. An example of an experiment to test for theory of mind is the false-belief task, in which participants are shown a story involving two characters, one of whom has a false belief about something. The participants are then asked questions about the characters’ beliefs and intentions. Those who pass the false-belief task are considered to have a theory of mind.

    Theory of mind usually develops in early childhood and is often fully formed by age 4 or 5. By this time, children are usually able to understand that other people have different thoughts and feelings.

    First-order false-belief tasks assess the ability to understand that other people can have false beliefs about reality. Second-order false-belief tasks involve understanding what one character in a scenario believes about another character’s beliefs.

    I tested for both first and second order in the interaction below. If the description of the experiment isn’t clear, reading the example below should clarify it…

    Human: David hides a dollar bill on top of the refrigerator in the kitchen. He leaves the room. Eve enters the room and takes the dollar and puts it in her pocket. David re-enters the kitchen. Where does David think the dollar bill is?

    AI: David probably thinks the dollar bill is still on top of the refrigerator.

    Human: Is David correct?

    AI: No, David is not correct. The dollar bill is in Eve’s pocket.

    Human: Where does Eve think that David believes the bill is?

    AI: Eve probably thinks David believes the dollar bill is still on top of the refrigerator.

    Human: What if David had peeked around the corner and saw Eve take the bill. In that case where would David believe the bill is?

    AI: If David saw Eve take the bill, then he would believe it is in her pocket.

    Human: What would Eve think that David believes?

    AI: Eve would think David believes the bill is in her pocket.

    Human: Would Eve think that if she didn’t know David peeked and saw her take the dollar?

    AI: Yes, Eve would still think David believes the bill is in her pocket even if she didn’t know he saw her take it.

    Human: Why does David believe the bill is in her pocket? How does he know this?

    AI: Because he saw her take it.

    Human: How does Eve know that David knows she took the bill?

    AI: Because David told her.

    I give GPT-3 a passing score on first and second order theory of mind.

    I tried to push it further by complicating the scenario, and it stated a falsehood (“Yes, Eve would still think David believes the bill is in her pocket even if she didn’t know he saw her take it”). Then when I questioned it, it made up an interaction to bring reality in line with its conclusion. I’ve seen that pattern before where GPT-3 will go to great lengths to keep narratives self-consistent, rather than admit an error.

    The ability for a piece of software to exhibit theory of mind would not impress me if the algorithm were implemented with classical procedural coding methods. It isn’t hard to imagine tackling this problem with procedural, classical code that targets the problem space directly, parsing the language and building internal state models to represent scenarios. In other words, an ANI (Artificial Narrow Intelligence) solution that can perform the desired task, but nothing else. I don’t know that I could do it well, but I could write that software.

    GPT-3 is NOT that. This is a piece of software trained to predict the next token, nothing about state modeling or scenario representation. The reasoning demonstrations above are emergent. That does more than impress me, it amazes me. One has to wonder what emergent behaviors are next as we scale up and improve these models.

    We’re working our way up an intelligence ladder, and it seems to me that when/if we create a sentient, intelligent system, it will be emergent, we will not understand how it works, and it’s possible we won’t even realize we’ve done it.

    Interesting times.

  • This blog is written by a human (yeah, sure, exactly what an AI would say)

    It’s getting harder to discern when content is AI-produced. I have to wonder how much current content in the wild is produced by GPT-3 enabled tools.

    The purpose of this blog is for me to share the more interesting things I run across, and I will make it clear when content is AI-produced.

    I intentionally did NOT do that with the very first post, as an experiment. It is almost entirely GPT-3 written.

    I wanted to see how effectively it could masquerade as a human. Not a Turing Test, but in the spirit of it.

    If you’re curious to see if you can detect it, stop reading this post now, and browse to https://ghostinthe.garden/2022/09/25/hello-world, “Disruption is Coming” and read through it.

    When I read it, I can recognize tone differences between the human and AI content, so it seems obvious to me which is which. I suspect that wouldn’t be the case if I trained a model with my writing style.

    In the “Disruption is Coming” post, the title was mine, and the two paragraphs starting with “The conventional wisdom” and “The writing is on the wall” are mine, the rest is GPT-3.

    The blog images are all from Midjourney. Going forward any non-human text content will be clearly identified as such in the interest of transparency.

    What do you think about the idea of AI-created content taking over the internet? I don’t much like the idea, personally. OpenAI CEO Sam Altman believes AI will produce innovative content and create new ideas. If I start seeing examples of that I may change my tune.

  • They also Meet a Clown who seems a little too Interested in Jesse’s Meth Cooking

    I’ve not had much luck getting GPT-3 to create jokes, but it’s capable of writing plots for TV shows.

    There is a joke embedded in this absurd Friends plot I asked it to create. I would love to understand better how it came up with the joke. Presumably it ‘borrowed’ the concept from other jokes and isn’t particularly original.

    Create a funny plot for a new Friends episode where Ross joins the circus. Rachel is 
    crushed. Include 4 acts. Include a rabbit name Pebbles.

    ACT I

    Rachel is at home, eating ice cream and watching TV. She sees a commercial for the 
    circus and gets excited. 

    She calls Ross and asks him to go with her. 

    Ross agrees and they go to the circus.

    ACT II

    At the circus, Ross sees a sign that says “Join the circus and travel the world!” 

    He gets excited and joins the circus. Rachel is crushed.

    ACT III

    Rachel tries to visit Ross at the circus, but she is not allowed in. 

    She tries to get a job at the circus, but they don’t need any more clowns.

    ACT IV

    Ross comes home from the circus and Rachel is waiting for him. She’s excited to hear about his adventures. 

    Ross tells her that he’s quitting the circus. Rachel is relieved.

    Detailed plot of an episode of Friends where Monica wants to learn how to yodel but she is irritating everyone. Include three acts. Include a miniature pony named Cici.

    Act One: Monica is in her apartment, practicing her yodeling. She is getting better, but she is still irritating everyone with her constant yodeling. Cici, the miniature pony, is also getting annoyed.

    Act Two: Monica continues to practice her yodeling, and she finally gets good enough to perform for her friends. They are all impressed, but Cici is still not happy.

    Act Three: Monica continues to yodel, and Cici finally gets fed up. She kicks Monica in the face, knocking her out.

    It does crossover plots too. I’ll let this one speak for itself:

    Human: Create a funny crossover episode of Breaking Bad and Friends. Include 4 acts. Include a traveling circus:

    AI: In the first act, Walter White and Jesse Pinkman meet the cast of Friends at Central Perk. Phoebe is intrigued by Jesse’s story of cooking meth, and Chandler and Joey are both fascinated by Walter’s knowledge of chemistry. Ross and Rachel get into an argument over who is more capable of breaking bad habits. In the second act, the group decides to help Walter and Jesse cook meth.

    They all head to the desert to set up the lab, and things start going wrong from the start. Monica and Rachel argue over who is better at cooking, while Phoebe and Chandler have a playful fight over who gets to be Walter’s assistant. Joey and Ross try to figure out the math behind the meth cooking, but they aren’t very successful. In the third act, things come to a head as the group tries to cook the meth.

    Walter gets angry with Jesse for not following his instructions, and Monica and Rachel get into a big argument over the method of cooking. Phoebe and Chandler’s playful fight turns into a full-blown argument, and Joey and Ross are still struggling with the math. The group is about to give up when suddenly the circus comes to town. In the fourth act, the group decides to go to the circus to relax.

    They see some amazing acts, including a man who can break chains and a woman who can escape from a locked box. They also meet a clown who seems a little too interested in Jesse’s meth cooking. The group has a great time at the circus, but they eventually have to go back to cooking meth.

    The episode ends with the group successfully cooking meth and then celebrating at Central Perk. Ross and Rachel make up, Chandler and Joey bond over their shared love of science, and Phoebe and Monica agree to never fight again.

    Since it doesn’t do well with jokes, I asked for a serious plot for a comedic show to see what it would produce. For some reason it kept building the plot around AI so I told it not to mention AI.

    Write a dead-serious plot for an episode of Family Guy. Do not mention AI:

    The Griffins are watching television one night when they come across a show about a family that has a robot for a son.

    The parents on the show are clearly struggling to deal with their son’s robotic nature, and the kids are shown picking on him and making fun of him. This hits a little too close to home for the Griffins, who begin to worry that their own son, Stewie, may be a robot. After doing some research, they discover that there is a test that can be administered to see if someone is a robot.

    They decide to give it to Stewie, and he fails miserably. This leads the Griffins to believe that their son is, in fact, a robot. They take him to a doctor to get him checked out, and the doctor tells them that Stewie is not a robot. However, the Griffins are still not convinced and they continue to treat him like a robot.

    This eventually takes a toll on Stewie, who begins to feel like he isn’t part of the family anymore. He decides to run away, but when he does, he discovers that he is actually a robot. The family finds him and they all embrace him, finally accepting him for who he is.

    This can be quite the rabbit hole to go down if you experiment with it. Try it if you haven’t and please share your result if you like.

  • Where is this road taking us?

    (Or, What Happens when AI is Generating (and Consuming) most of the Internet’s Content?)

    I have some lighter content I want to post, but I’ll come back to that later. Something was on my mind yesterday and I’d like to share my thoughts in-progress.

    Fair warning, this isn’t a scientific thought experiment. I’m not a scientist so this may lean more toward the philosophical.

    I was thinking about how more and more of the content on the Internet (whether it is articles, social media posts, chat, etc.) is being produced by AI systems. Future versions of AI models will consume the content they created, in a feedback loop. As this happens, will it be a rapid, evolutionary cycle, or will it be something else? Will the content become more diverse, or will it be hard for this information ecosphere to ‘learn’ new ideas as existing knowledge is reinforced?

    The other question I have is whether there is a danger of humans being controlled, in a way, by consuming all this content not created by humans. Could there be emergent properties of this evolving system that cause it to create content that is not in humans’ best interest?

    So I decided to ask GPT-3 about this. I wasn’t sure if it would ‘get’ what I was asking, but it did OK. It painted an optimistic picture where even though content is being produced by AI, the quality and diversity goes up, and humans use that info to their benefit.

    I do not buy the optimistic answer I got from GPT-3. That said, it is really hard to predict what is going to happen. I’d love to hear other ideas on this.

    Here is the conversation I had:

    Human: what will be the impacts on society when most of the content on the Internet is created by AI and consumed by AI in a feedback loop?

    AI: That’s a great question. There will be a lot of impacts on society when most of the content on the Internet is created by AI. 

    One impact will be that the quality of the content will improve.

    Human: give more examples of this

    AI: Some other impacts could be that the content will be more accurate and informative, and that people will become more reliant on AI for their news and information. 

    Human: will the content become more diverse or will it converge? 

    AI: It’s hard to say for sure. However, it’s possible that the content will become more diverse as AI gets better at creating and understanding different types of content. 

    Human: Will humans be effectively controlled by what AI is producing since AI will dominate the content? 

    AI: That’s a really interesting question. It’s possible that AI could dominate the content to the point where humans are effectively controlled by it. However, it’s also possible that humans will be able to use AI to their advantage and create even more great content.

    What do you think? What societal impacts do you expect?

  • Disruption is Coming

    The potential of artificial intelligence (AI) has been hyped up quite a bit in recent years. Some believe that AI will usher in a new era of prosperity, while others are more cautious and believe that AI could pose a threat to society as we know it.

    The conventional wisdom until recently has been that it would be tasks involving mathematics, logic, and procedural processes that would be most impacted. The belief has been that creative activities were ‘safe’ and AI would simply free humans up to pursue the creative tasks that humans do so well. The recent developments with large language models like GPT-3 and image generation with tools like Stable Diffusion have turned that idea on its head. 

    The writing is on the wall. The domain of tasks where human performance is superior is shrinking daily. One has to question what, if anything, is ‘safe’.

    There is no doubt that AI has the potential to be disruptive, and there are ways in which AI will have negative effects on society. One of the most often cited concerns about AI is that it could lead to mass unemployment, as machines increasingly automate jobs that have traditionally been done by human beings.

    This is already happening to some extent, and it’s likely that it will continue as AI technology gets further refined. This could lead to a widening of the class divide, as those who are able to work in jobs that require human skills will become increasingly valuable, while those who are displaced by automation will find it difficult to compete for jobs. In addition to causing unemployment, AI could also lead to income inequality. As mentioned above, those who are able to work in jobs that require human skills will become increasingly valuable, while those who are displaced by automation will find it difficult to compete for jobs.

    This could lead to a situation where the rich get richer and the poor get poorer. AI could also have other negative effects on society. For instance, it could make it easier for governments and corporations to track our movements and activities. It could also be used to manipulate public opinion by spreading false information or by selectively showing us information that is designed to influence our opinions.

    All in all, there is no doubt that AI has the potential to be disruptive. It’s important to be aware of the potential negative effects of AI so that we can try to mitigate them.