Stack Overflow's CEO doesn't understand Stack Overflow

[I've added an update after watching the announcement.]


This is a companion discussion topic for the original entry at https://jlericson.com/2023/07/26/not_understanding.html

I enjoyed the architecture astronaut description… I immediately thought of several people from past workplaces that will now eternally be floating high above the earth in space suits when I think of them :slight_smile:

I think that very few companies survive the loss of their founders unless the company culture they leave behind is healthy and they have mentored (looking for a less controversial word than “groomed” here) the person taking over for them to have the same vision for the company’s purpose. As soon as a company stops being led by people who actually use or participate in what it produces, its purpose changes from whatever it was to just making money and showing constant growth. They kill the golden-egg-laying goose because they don’t understand how the goose turns food and affection into gold.

I don’t think Stack Exchange’s upper management really understands how the community turns sand (questions) into pearls (knowledge that everyone wants to use to train their AI). What motivates someone to spend the time to find a question and write a high-quality answer to it multiple times a week? Why does having a high reputation on a site have value to someone? I am skeptical that replacing human connections with monotone AI-generated text is going to make the sites more attractive to answerers, who are, let’s face it, far more valuable in aggregate to the network than the people who only ask questions.

The network shouldn’t aspire to be just another search engine that people dump questions into and get out quick responses of varying usefulness. It is so much more than that. It’s people building international communities to share knowledge and experience. It’s potentially creating a curated library of a huge portion of human knowledge. It’s very sad that the current leadership is chasing page views instead of building something for the future.


My impression during the hiring process was that Stack Overflow investors were looking for someone to work out an exit plan. If so, they picked the right person! Prashanth usually says the right things in public, but really seems out of his depth when it comes to the day-to-day operation of a company.

Updated! Thanks.


Okay… critical. What would you do differently? What would be your strategy to save Stack Overflow? Imagine you’re the CEO 6 months ago. ChatGPT has decimated your daily active users. People are now posting GPT answers so that they can game the kudos reward system. You’re faced with a really complicated landscape. The data your community created has been used to train an AI system that now replaces a large part of the need for your community. The truth is that GPT can’t answer new questions about new technologies; it can only quickly search and synthesize the data that has already been accumulated on sites like yours. How do you focus your community on answering new questions or refining old answers? What do you do? How do you fix this?

It’s far too easy to point out one thing someone did wrong in your eyes, but what would you do differently?

I’ve written a lot about this in broad strokes. See the Parable of the rose garden for a start. I saw problems well before I left the company and my warnings fell on deaf ears. I also think ChatGPT is merely a trigger that set off a chain reaction which had been latent for many years. If it hadn’t been ChatGPT, something else would have caused this outcome instead.

What would I do now, if I were suddenly hired to be CEO? I’d start by focusing on the long-term problem of answer rates. It’s hard to remember today, but Stack Overflow used to be a miracle of quick answers. Not as fast as ChatGPT, perhaps, but the median time to first answer was 24 minutes. I don’t know what the number is today (because my query times out), but I’m guessing it is much higher than that, if you get an answer at all. Solving this problem isn’t easy, but it’s a lot easier than cramming a GPT model into Q&A.
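(For the curious, here’s a minimal sketch of how one could spot-check that number today without SEDE, using the public Stack Exchange API. The endpoints and fields are documented API features; the sampling approach - one page of 100 recent questions - is purely illustrative and not the query I used to run.)

```python
# Rough estimate of the median time to first answer on Stack Overflow,
# using the public Stack Exchange API. One page of recent questions is
# only a crude sample; a real measurement would paginate over a window.
import statistics
import requests

API = "https://api.stackexchange.com/2.3"
PARAMS = {"site": "stackoverflow", "pagesize": 100}

# The 100 most recently asked questions, with their creation timestamps.
questions = requests.get(
    f"{API}/questions",
    params={**PARAMS, "order": "desc", "sort": "creation"},
).json()["items"]
asked_at = {q["question_id"]: q["creation_date"] for q in questions}

# Answers to those questions, oldest first, so the first one seen for
# each question is its first answer. (IDs are semicolon-separated.)
ids = ";".join(str(qid) for qid in asked_at)
answers = requests.get(
    f"{API}/questions/{ids}/answers",
    params={**PARAMS, "order": "asc", "sort": "creation"},
).json()["items"]

first_answer = {}
for a in answers:
    first_answer.setdefault(a["question_id"], a["creation_date"])

# Minutes from question creation to first answer, for answered questions.
delays = [(first_answer[q] - asked_at[q]) / 60 for q in first_answer]
if delays:
    print(f"{len(delays)}/{len(asked_at)} answered; "
          f"median time to first answer: {statistics.median(delays):.0f} min")
else:
    print("No answers yet in this sample.")
```

Nothing hinges on this particular sample; the point is only that the answer-rate metric is cheap to measure and track over time.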

Speaking of which, I’d also consider reviving Stack Overflow Jobs. From what I understand it was a profitable product and a mature one. It seems easier to start the servers and call former clients than to build a new product from scratch.

Along the same lines, I’d also turn chat into a product. (Did you know Stack Exchange has chat? It does!) In my current job I use Microsoft Teams and Slack. (Yes, both.) Roughly once a day I wish we could just use Stack Exchange chat, but we can’t because it’s not a product. Would it be profitable? I don’t know, but the product itself has been in operation (with very little development) for years and I think it has advantages over all the competitors I’ve tried. At the very least it could be an add-on product to Stack Overflow for Teams.

Instead of building artificial interactions using LLMs, I’d look into building mentoring relationships between real people. That would truly make Stack Overflow a resource for learning and promote better interactions than the (somewhat confrontational) Q&A format. Meanwhile, I’d put resources into improving the Q&A experience as well. It’s the core of Stack Overflow and the only reason you and I are talking about it at all.

While doing these things, I’d talk about them incessantly and encourage every employee to talk about Stack Overflow’s serious challenges. And I’d apologize for screwing up, even if I wasn’t the person who screwed up. Finally, I’d be skeptical of ChatGPT and point out that authentic human interactions are just better than talking to a bot.

Would it work? I don’t know.


Are they malicious or incompetent? Yes, maybe, no, none of the above, both, and “it depends.”

In the initial case - the first time the doctor writes the script without allowing for nurse input, or the first time the CEO, perhaps unknowingly, adds friction to the company’s operations - it would most often be simple ignorance, not even incompetence.

After a few times of being told there is a better way - the doctor could ask the nurses which method the patient responds to, or the CEO could ask their subordinates what problems the intended choice could cause - and failing to modify their behavior, it progresses into probable incompetence. If, having reached that stage, they take corrective action when confronted with the reality of their choices, it remains mostly incompetence, since they continue to repeat the same mistakes. They’re willing to correct the problem when it happens while seemingly being unable to predict that it will happen again, for the same reasons as the last time, or the last dozen times.

On the other hand, after learning of the consequences, refusing to change behavior, and refusing to correct the current situation - the doctor indicates that it will be pills as written, regardless of the patient’s adaptability, or the CEO indicates that if the friction is too much the staff can just leave - it’s not mere incompetence. It might not be actual malice; concluding that would require knowing more than general information. It could, rather, be an egotist’s drive, or a sociopathic disregard for others, or any of several other possibilities. It could, of course, be simple malice. And even at the first point it could have been malice, or any of the other possibilities.

The likelihood that it is malice is low in the beginning, as there are other options. As the other options fall one-by-one, like tin soldiers in a storm, the probability of malice increases. As advised by Sherlock Holmes, “Eliminate all other factors, and the one which remains must be the truth.”


The million-dollar question, then, is whether the GenAI drive is dictated by Prosus (in which case, no way realistically to do anything about it), or driven by Prashanth (in which case it could be useful to get him fired). Short of getting him to answer the question directly, is there any way to know or at least speculate usefully?

The interest in GenAI is coming from the investment markets; the CEO is just responding to it by reassuring investors that the company is doing something about it. Given the current level of hype, it doesn’t matter who the CEO is; they will pursue some GenAI project. Their duty is to their shareholders, not to the community that uses the site.