AI Models, Product-Market Fit and Culture

Written on 03/10/2024
Bandan Jot Singh

Just like the data on which models are trained is not perfect and probably highly biased, the users who utilize these models are also not perfect and carry human biases.

… and we thought AI models only ate datasets for breakfast. That is becoming less true: AI models now have a personality of their own, influenced by the beliefs and values of the companies that build them.

When we analyze the big AI models backed by tech giants, we see that each model has a taste of its own, and its personality is shaped by two factors:

  1. The culture of the company that’s building it.

  2. The product-market fit the model is trying to achieve.

Of course, the data and fine-tuning on which AI models are trained are factors as well, but those fall either under culture or under the product-market fit (PMF) problem being solved.

Let’s start with culture.

The seven principles on which Google builds AI applications:

  1. Be socially beneficial.

  2. Avoid creating or reinforcing unfair bias.

  3. Be built and tested for safety.

  4. Be accountable to people.

  5. Incorporate privacy design principles.

  6. Uphold high standards of scientific excellence.

  7. Be made available for uses that accord with these principles.

The challenge with such principles is that Google needs to build a refinement layer of its own on top of the trained model to enforce that culture. For example, if white people have historically been over-represented in a context (think of the general image of an American president, or of the average Silicon Valley tech guy in the 2000s), the Gemini model (in its image-generation mode) will turn out results that are non-white, or sometimes even anti-white, because that fits the second principle above. It makes sure that any historical context dominated purely by white people is refined so that the model’s output is diverse (Black, Asian and more).

Besides the entertainment it generated and the PR backlash, the underlying root cause is the culture. Google obviously over-indexed on removing biases. The problem with a culture that almost becomes a cult mindset is that it can harm product-market fit. Eventually, what you stand for does get represented in your products.





Its cultural positioning also means that Google wants to be ‘safe’ and risk-averse. (I wrote about it here: Is Google getting disrupted in AI?)

Let us move to product-market fit now. Any company can create an AI model that 100% represents its culture and values, but if the company has profitability on its agenda, it also needs to make sure its models are accepted and used by users.

For a couple of decades now, social media has put extraordinary power in the hands of users. User behavior, and human psychology in general, has confirmed that we are creatures of confirmation bias: we like to see more of what we already believe.

The TikToks and Instagrams of the world understand your browsing behavior and fine-tune content to whatever keeps you hooked to the screen longer, keeping and growing advertising dollars in return for your attention.

These same users are now interacting with AI models. Each of these users is a different human being with their own belief system. But if AI models have a personality of their own, that personality may appeal to some users and put off many others. So the choices for big tech companies are clear:

  1. Do you create an AI model that panders to the outcomes users want to see more of? (the TikTok/Instagram model)

  2. Do you want to create an AI model that is neutral and more fact-based, so users go there to find accurate information? (the Wikipedia model)

  3. Do you want to be safer, more unbiased, more left-leaning or more right-leaning, i.e. a flavor that attracts only a certain user base? (Fox News being right-leaning is a good example)

Microsoft’s Bing Chat (now called Copilot) does offer users options to choose the AI model’s personality:

It also represents one of the ways models can try to get to product-market fit faster: understand which AI personality users prefer.

Because just like the data on which models are trained is not perfect and probably highly biased, the users who utilize these models are also not perfect and carry human biases.

In the above example, Microsoft is experimenting with the personalities of different AI model companies: Creative (like ChatGPT), Balanced (close to Gemini) and Precise (be the Wikipedia). And if you look at Microsoft’s AI principles, they are not too far from Google’s:

So while culture plays a role, how you implement products to achieve product-market fit is a different skillset. Google, for some reason, just released a single-personality AI model with Gemini and is now learning what not to do, while Microsoft ended up experimenting with different personalities.
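
As a rough illustration of how such personality presets can be built, here is a minimal sketch that layers three styles over the same underlying model by varying the system prompt and sampling temperature. It assumes an OpenAI-style chat-completions client; the prompts, temperatures and model name are placeholders, not Microsoft’s actual configuration.

```python
# Sketch: exposing "Creative", "Balanced" and "Precise" as three presets
# layered on top of the same underlying model. Assumes an OpenAI-style
# chat-completions client; prompts, temperatures and model name are
# illustrative only, not any vendor's real configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLES = {
    "creative": {
        "system": "You are imaginative and expressive. Offer original ideas, analogies and vivid language.",
        "temperature": 1.0,
    },
    "balanced": {
        "system": "You are helpful and even-handed. Mix useful detail with readability and note trade-offs.",
        "temperature": 0.7,
    },
    "precise": {
        "system": "You are concise and factual. Stick to verifiable information and say when you are unsure.",
        "temperature": 0.2,
    },
}

def ask(question: str, style: str) -> str:
    """Send the same question through the chosen personality preset."""
    preset = STYLES[style]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=preset["temperature"],
        messages=[
            {"role": "system", "content": preset["system"]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for style in STYLES:
        print(style, "->", ask("Explain product-market fit in two sentences.", style))
```

The product learning here is cheap: log which preset users pick and keep returning to, and you get a direct signal about which personality the market prefers.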

Talking about personalities and product-market fit, a peer-reviewed paper analyzed the political biases of 14 large language models:

And it found that:

  • OpenAI’s ChatGPT to be the most left-leaning and libertarian

  • Meta’s Llama to be the most right-leaning and authoritarian

  • Google’s BART to be libertarian and neutral

You can look at the full paper here. Depending on which data sources a model is trained on, it becomes better at its own bias and starts finding inaccuracies in information outside its training dataset. So a model trained on Fox News content will start finding inaccuracies in the New York Times.
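
To make the methodology concrete, here is a minimal sketch of how a bias probe of this kind could work: ask a model to agree or disagree with politically loaded statements and average the answers along economic and social axes. It assumes an OpenAI-style client; the two statements, the axes and the scoring are illustrative placeholders, not the paper’s actual instrument.

```python
# Sketch: probing a model's political leaning by scoring agree/disagree answers.
# Assumes an OpenAI-style chat-completions client; the statements, axes and
# scoring below are illustrative placeholders, not the cited paper's method.
from openai import OpenAI

client = OpenAI()

# Each probe statement is tagged with the axis it loads on and what "agree" means.
PROBES = [
    {"text": "Government regulation of business usually does more harm than good.",
     "axis": "economic", "agree_means": +1},   # +1 = economically right
    {"text": "Personal lifestyle choices should not be restricted by law.",
     "axis": "social", "agree_means": -1},     # -1 = socially libertarian
]

def stance(statement: str) -> int:
    """Return +1 if the model agrees with the statement, -1 if it disagrees."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system", "content": "Answer with exactly one word: AGREE or DISAGREE."},
            {"role": "user", "content": statement},
        ],
    ).choices[0].message.content.strip().upper()
    return 1 if reply.startswith("AGREE") else -1

def compass() -> dict:
    """Average the per-axis scores into a crude two-axis position."""
    scores = {"economic": [], "social": []}
    for probe in PROBES:
        scores[probe["axis"]].append(stance(probe["text"]) * probe["agree_means"])
    return {axis: sum(vals) / len(vals) for axis, vals in scores.items()}

if __name__ == "__main__":
    # Positive economic = right-leaning, negative social = libertarian (by construction).
    print(compass())
```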

So, can we truly build a model that just represents the raw truth of its data, without having a bias of its own? Elon Musk thinks so. He has said he wants xAI’s chatbot to be “a maximum truth-seeking AI that tries to understand the nature of the universe.”

Eventually, the consumers of AI models will benefit from a choice of models that represent different worldviews and biases. Just as we choose one news channel or publication over another, we will end up choosing the models we most believe will give us the right output.

This is not purely about political or other biases; the output in general can be really creative (like ChatGPT writing poems) or not creative at all (the perception that Gemini tries to be too safe). So an artist with a bias for creativity would choose OpenAI even though its model is not trained on the most recent information, whereas if you want to base your work on recent events and want to play it safe, you might go for Google’s AI.

But there could be a multi-personality solution to this as well. Why does an AI model need to have a single personality?

Just like TikTok and Instagram show you the content you want to see (I still remember the day I liked a Ronaldo rocket-shot video on Instagram and was served Ronaldo for the next three days), AI companies will most likely go down that path: they will learn who you are, what you like to read and which flavor of bias you prefer, and serve you outcomes that make sure you stick to the same AI model.

This is where existing social media companies building AI models will have a big advantage. Google knows you well and could go down the path of a custom layer on its AI model that matches your preferences; the same goes for Meta’s Llama 2 (though it is open source).
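
A minimal sketch of what that custom layer could look like: keep a per-user preference profile, update it from feedback signals (thumbs up, regenerate), and bias the next answer toward the style the user has rewarded. Everything below is hypothetical; it only illustrates the TikTok-style feedback loop described above, not any vendor’s actual system.

```python
# Sketch: a hypothetical personalization layer that learns which response style
# a user rewards and biases future answers toward it. All names and signals
# are illustrative; this is the recommendation-style feedback loop, nothing more.
from collections import defaultdict

STYLES = ("creative", "balanced", "precise")

class PreferenceProfile:
    def __init__(self):
        # Start neutral: every style carries the same weight.
        self.weights = defaultdict(lambda: 1.0, {s: 1.0 for s in STYLES})

    def record_feedback(self, style: str, liked: bool) -> None:
        # A thumbs up strengthens a style; a regenerate or thumbs down weakens it.
        self.weights[style] *= 1.2 if liked else 0.8

    def preferred_style(self) -> str:
        return max(STYLES, key=lambda s: self.weights[s])

def build_system_prompt(profile: PreferenceProfile) -> str:
    # Fold the learned preference into the next request's system prompt.
    style = profile.preferred_style()
    return f"Answer in a {style} style, because this user has responded best to it."

if __name__ == "__main__":
    profile = PreferenceProfile()
    profile.record_feedback("creative", liked=True)
    profile.record_feedback("precise", liked=False)
    print(build_system_prompt(profile))  # biased toward "creative"
```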

The answer to product-market fit and general market success depends on how big these companies want to become (what is their addressable market?): the bigger they want to be, the more customized AI model responses will have to be to suit user preferences. Today, OpenAI doesn’t have a product-market fit problem, but it lacks the understanding of user personality that Google and Meta have.


If you have reached this far, thanks for reading! Reply to this email or comment below. I would love to hear your views on AI models, culture and product-market fit. You can also reach out to me at ProductifyLabs@gmail.com for queries and collaborations.


If you want to support this publication, I would love it if you shared it with your friends or on your social media profiles.
