
Launch Plan: Max AI #306


Open · 4 of 36 tasks
joethreepwood opened this issue Apr 2, 2025 · 13 comments
Assignees: joethreepwood
Labels: marketing (Issues/PRs related to marketing and content)

Comments

@joethreepwood (Contributor) commented Apr 2, 2025

To my knowledge, Max AI isn't super close to launch, and we don't yet have a pricing RFC. The target launch date is still just 'Late Q2'.

However, it's important to get this launch right, so I'm getting ahead with a rough launch plan now that we can continue to add to.

Max AI is an interesting product because the value prop is going to be about accelerating existing workflows and tying into the zeitgeist. It isn't something where there'll be a tangible business outcome we can easily quantify, or where there are comparable tools we can replace.

Instead, it'll be a case of capturing the personal excitement and enthusiasm of users, and turning that into broad awareness. Social media is huge for popularizing things this way, and for AI in particular - so we should explore how to lean into that.

Best practices

  • Ensure the product has at least one customer story within 3 weeks of launch
  • Ensure we publish best practice content for the product and link to it from docs
  • Ensure the product has at least one pre-made template (or similar) for users
  • Ensure the product has at least one tutorial at launch (Done via <MaxCTA /> component) - @joethreepwood
  • Ensure the product has a robust docs page, and product page if needed - @joethreepwood
  • Ensure the product is added to email and in-app onboarding flows - @joethreepwood

For onboarding flows, I'll make a point of adding this to emails for non-technical users especially, as they'll clearly be able to benefit.

Template-wise, I imagine we should include a template description which teams can use to give Max AI context.
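As a purely hypothetical sketch (none of these fields are decided), that template could look something like:

```
Company: Hogflix, a demo video streaming company
Product: Web app, instrumented with PostHog
Key events: signed_up, started_trial, watched_video
Current goal: Improve trial-to-paid conversion
```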

Customer story-wise, I'm interested to explore if there are influencer opportunities here to tie into the social angle.

Launch plan

Before launch - Deadline: 19 May

Launch day - Deadline: 22 May

A few tweaks to the usual here. I think we can invite older users whom we'd normally exclude, but who may have churned due to difficulty with the platform. I think we can also arrange a repeating social activity and tie in with some other social efforts we have going on.

Follow-on activity

  • Add to onboarding flow in-app - TBD by @joshsny
  • Add to email onboarding flow (emphasis on non-technical users) - @joethreepwood
  • Write a launch retro post for the blog - @joethreepwood
  • Schedule some 'Prompt of the week' style posts to keep visibility high - @joethreepwood
  • Run 'Prompt of the week' as a regular section in the weekly digest - @joethreepwood
  • Add an 'Ask Max' component to the 10 most popular docs pages - @joethreepwood

Some big ideas

Max AI AMA
We could let Max AI take over our Twitter for an AMA. We'd use him on a demo Hogflix account, let people ask questions via Twitter, run them through Max, then surface the results back to them as replies. All we'd need for this is some demo data and engagement.
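As a rough sketch of the mechanics (everything here is hypothetical: the #AskMaxAI hashtag and the ask_max placeholder are made up, though the Twitter side uses tweepy's standard client):

```python
# Hypothetical AMA loop - illustrative only.
import tweepy

def ask_max(question: str) -> str:
    # Placeholder for running the question through Max on the demo account.
    return f"Great question! Here's what I found for: {question!r}"

client = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

# Collect questions tagged for the AMA, answer each, and reply in-thread.
questions = client.search_recent_tweets(query="#AskMaxAI -is:retweet", max_results=10)
for tweet in questions.data or []:
    answer = ask_max(tweet.text)
    client.create_tweet(text=answer[:280], in_reply_to_tweet_id=tweet.id)
```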

Unique Max prompt
A second idea would be to give Max a unique capability where users can prompt him and he'll create an image, similar to the changelog frames, for the user. They can then share these with us on social, or use them to share news at their org.

joethreepwood added the marketing label Apr 2, 2025
@patricio-posthog commented
Thanks for starting this Joe - better early than late!

I suppose if Max AI ends up being a paid product (not available to all our users), it'd be similar to Teams, with a fixed monthly price? Pay $X per month and have Max AI on hand to help you with everything at PostHog. It could be enabled as an add-on, or even as its own product. But I think that's the way to do it (I can't imagine Max AI pricing where you pay per usage...)

joethreepwood self-assigned this Apr 3, 2025
@joethreepwood (Contributor, Author) commented

I've heard a mix of things on that front, so will defer to @Twixes on what pricing may look like (if anything).

@joethreepwood (Contributor, Author) commented

Chatted with Michael and think a great feature here would be working Max into all our tutorials with a suggested prompt. I've added this to the list above.

Thread: https://posthog.slack.com/archives/C01FHN8DNN6/p1743678702390879

@fraserhopper commented May 2, 2025

Hey guys, sorry to throw a bit of a dampener on this, but I think we might need to pump the brakes here: our lawyers have advised us to get some questions answered explicitly by the LLM providers, so we can update our DPA and our wording accordingly.

The questions they want us to ask @PostHog/team-max-ai and/or our LLM providers are below. Maybe @PostHog/team-max-ai you know the answers to some of these.

  1. Could you please provide a copy of the terms (incl. data processing addendum, if any) PostHog is expected to use to paper its relationship with the AI System provider?
  2. As part of providing the services to PostHog, will the AI System provider perform any training or fine-tuning of its model? If yes, please describe (i) the data the AI System provider proposes to use (including what it consists of: software code, text, images, financial data, technical data, templates, formats, etc.) and (ii) whether Usage Data and/or Customer Data will be used.
  3. If the answer to the above question is yes, will the AI System provider offer (i) "one brain" per PostHog customer or (ii) a "single common brain" pooling all PostHog customers under the same roof?
  4. Will the AI System provider log prompts and responses given to PostHog customers' staff? If yes, please describe and indicate how long this information will be kept.
  5. Is any data input to, output from, or otherwise derived from use of the AI System (including the logs referred to above) used for any model training purposes by PostHog? If yes, please describe (i) the data PostHog proposes to use (including what it consists of: software code, text, images, financial data, technical data, templates, formats, etc.) and (ii) whether Usage Data and/or Customer Data will be used.
  6. Will PostHog have adequate human oversight to ensure that the AI System functions as intended and in a compliant manner? If so, please describe.
  7. Have technical solutions been (or will they be) implemented to address AI-specific vulnerabilities such as data poisoning, data corruption, model flaws, etc.? If so, please describe.

I'll contact the LLM providers next week, once we have whatever answers we can get internally, and hopefully by the end of the week we can get these back to the lawyers to update the DPA and potentially the terms.

Our chat with our lawyers was a bit more eye-opening than I expected, but the good news is we're learning this stuff before general release and not after!

@Twixes (Member) commented May 5, 2025

I don't know all the answers, but here's what I can offer @fraserhopper:

ad 1: The terms are all standard API (business) use terms made public by OpenAI / Anthropic / Google Gemini.
ad 2: The terms of all those providers stipulate that they don't train on API inputs or outputs.
ad 3: I have no idea what constitutes a "brain" here.
ad 4: OpenAI retains prompts and responses for up to 30 days; Anthropic for 30 days, except prompts flagged for potential abuse, which are kept for up to 2 years; Gemini for up to 55 days.
ad 5: We aren't currently using any data for training, though we can't rule out doing some fine-tuning in the future.
ad 6: Yes.
ad 7: We're rolling out automated evaluations of AI output to monitor for such issues. (A sketch of the shape of these is below.)
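To give a feel for the shape of these evaluations (a hypothetical sketch, not our actual harness):

```python
# Hypothetical sketch of an automated output evaluation - illustrative only.

def generate_answer(prompt: str) -> str:
    # Placeholder for the real LLM call behind Max.
    return f"Answer to: {prompt}"

# Each check takes (prompt, answer) and returns True when the output passes.
CHECKS = [
    lambda prompt, answer: len(answer.strip()) > 0,          # non-empty output
    lambda prompt, answer: "api_key" not in answer.lower(),  # no leaked secrets
]

def evaluate(prompts: list[str]) -> float:
    """Run every check against every prompt and return the overall pass rate."""
    results = [
        check(prompt, generate_answer(prompt))
        for prompt in prompts
        for check in CHECKS
    ]
    return sum(results) / len(results)

print(evaluate(["Which insight shows weekly signups?"]))  # e.g. 1.0
```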

Hope this is helpful. I'd like to prevent legal wrangling from blocking rollout.

@fraserhopper commented

> ad 3: I have no idea what constitutes a "brain" here.

This is about pooling data: does all the data get combined into one big PostHog organisation, or is end-user data kept separate when it goes to the LLMs? E.g. does any one customer's end-user data get mixed with any other customer's?

> Hope this is helpful. I'd like to prevent legal wrangling from blocking rollout.

Yeah we will be contacting the LLM providers this week and hopefully we get the answers back.

I hope this doesn't delay anything, but it would be much better to delay a couple of days and fully understand how the data is being handled than to mishandle customers' end-user data.

@Twixes (Member) commented May 5, 2025

> E.g. does any one customer's end-user data get mixed with any other customer's?

Data doesn't get mixed between customers – specifically, LLM generations always use data from only one project.
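To sketch what that scoping means in practice (hypothetical code, not the actual implementation): every generation starts from a single project ID, and only that project's data can reach the prompt.

```python
# Hypothetical sketch of project-scoped prompt assembly - names are
# illustrative, not PostHog's actual code.

def fetch_project_events(project_id: int, limit: int = 100) -> list[str]:
    # Placeholder for a query that is always filtered by a single project_id.
    return [f"event {i} (project {project_id})" for i in range(3)]

def build_prompt(project_id: int, question: str) -> str:
    # The only data source is keyed by one project_id, so a generation
    # can never mix one customer's data with another's.
    context = "\n".join(fetch_project_events(project_id))
    return f"Project context:\n{context}\n\nQuestion: {question}"

print(build_prompt(project_id=42, question="What changed this week?"))
```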

Definitely agree with handling the data in kosher ways. :)

@fraserhopper commented

> ad 1: The terms are all standard API (business) use terms made public by OpenAI / Anthropic / Google Gemini.

@Twixes I assume Cohere is the same here?

@fraserhopper commented

> Is any data input to, output from, or otherwise derived from use of the AI System (including the logs referred to above) used for any model training purposes by PostHog? If yes, please describe (i) the data PostHog proposes to use (including what it consists of: software code, text, images, financial data, technical data, templates, formats, etc.) and (ii) whether Usage Data and/or Customer Data will be used.

> ad 5: We aren't currently using any data for training, though we can't rule out doing some fine-tuning in the future.

Right, so if we were theoretically going to do this "fine-tuning", what would we do and how would we do it? Would this use our customers' end-user data? If so, that would require a pretty extensive rewrite of our DPA, so it would be good to understand this now.

@fraserhopper commented May 6, 2025

> Will PostHog have adequate human oversight to ensure that the AI System functions as intended and in a compliant manner? If so, please describe.

@Twixes you answered yes here, but you offered no description - can you please describe?

Same thing for:

> Have technical solutions been (or will they be) implemented to address AI-specific vulnerabilities such as data poisoning, data corruption, model flaws, etc.? If so, please describe.

@fraserhopper commented May 7, 2025

@Twixes Just flagging that we need answers to go back to our lawyers with. I can tell them we have some time pressure here, but it would be good to get them the details on Q5-7.

(Sorry to push, I just really don't want to hold up the release here if we can avoid it.)

@Twixes (Member) commented May 7, 2025

Ofc, answering @fraserhopper:

> I assume Cohere is the same here?

Yup.

> Right, so if we were theoretically going to do this "fine-tuning", what would we do and how would we do it? Would this use our customers' end-user data? If so, that would require a pretty extensive rewrite of our DPA, so it would be good to understand this now.

Fine-tuning means feeding an external model the same data that we currently pass in as input prompts. The only difference is that the result is a fine-tuned version of that model instead of a one-off output (a fine-tuned model is then theoretically better than the non-tuned one at a specific task) – but the scope of data used isn't any different from regular usage.
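To make the mechanics concrete, here's a sketch against OpenAI's public fine-tuning API (we don't do any of this today; the file name and model choice are illustrative):

```python
# Sketch only - we don't currently fine-tune anything.
from openai import OpenAI

client = OpenAI()

# The JSONL file would contain the same kind of prompt/response pairs
# we already send to the API for regular generations - no new data scope.
training_file = client.files.create(
    file=open("max_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# The provider returns a tuned model ID; the inputs are unchanged in kind.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id)
```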

"Will Posthog have adequate human oversight(s) to ensure that the AI System functions as intended and in a compliant manner? If so, please describe" - you answered yes here but you offered no description, can you please describe

I mean, this AI system can't interact with the outside world and doesn't decide on any sensitive topics like credit applications – it has the same human oversight and compliance protocols as the rest of PostHog's systems, for example automated tests. I don't know what else could possibly be needed for this question.

> Same thing for "Have technical solutions been (or will they be) implemented to address AI-specific vulnerabilities [...]"

I replied "We're rolling out automated evaluations of AI output to monitor for such issues." – is this insufficient? Can you specify what's missing?

@fraserhopper commented

Thanks @Twixes
