Why AI is a problem on Stack Overflow

The idea of using drafts to estimate the volume of potentially AI-generated posts is an interesting one, despite its problems. It makes me wonder how much better the company’s response to ChatGPT could have been had it proactively added similar indirect metrics to measure behaviors that might indicate changes in site health. Of course, the company would then have to define “healthy”, and that may be difficult because it does not necessarily overlap with “generates measurable revenue growth”.

It makes me think of agile’s “Definition of Done”:

Definition of done is a simple list of activities (writing code, coding comments, unit testing, integration testing, release notes, design documents, etc.) that add verifiable/demonstrable value to the product.

Keeping an auditable list of what you mean when you say a task is “done” (or, in this instance, when you say a site is “healthy”) helps get people on the same page when measuring progress. Is a site improving, stagnating, or declining? Stack Exchange already has metrics for getting a site through public beta on Area51, but for some reason those metrics lose visibility once a site graduates.

Is it more important to welcome new users than to discourage the posting of AI-generated junk? How can you even have that conversation if everyone in the room has a different idea of what a healthy site looks like? Keeping metrics in front of a community might also have the benefit of engaged users working to improve those statistics.