What is something that you wish OpenAI or ChatGPT could do but they can't yet?

I’d love to hear your ideas on any feature you think should be included, apps you wish existed, or issues you’re having trouble solving with the AI tools available today!

Do you have a certain chore or pain point that you’d like to automate but are unable to solve?
Your suggestions may spur the development of fresh products or services that improve everyone’s quality of life!

Hallucinations, and responses that don’t meet all of the prompt’s requirements, such as ignoring certain information or failing to pay attention to it at all.

Reasoning, even when using a different tool won’t address the problem. This is a significant area in which OpenAI has fallen short: the rationale behind a response, the detailed explanation of how it was arrived at, and other related information.

It can accomplish just about everything. You only need to give it the right prompts.

Reading Gibson’s “Neuromancer” and “Count Zero” has made me yearn for a “five minute precis”: take a topic, perhaps something noteworthy, and create something that can be read or heard in five minutes. The important things are that it be truthful, detailed enough to give you a fundamental understanding, and free of unnecessary detail.

We were promised that “intermediate” AI would help us learn and become informed, not make us feel less intelligent, divert our attention, or try to sell us more unnecessary products.

The ‘five minute precis’ is an idea I adore! It’s like getting a condensed yet comprehensive synopsis of a subject. Something that makes difficult information quick and clear to grasp, without adding unnecessary complexity, is precisely what we need from AI. I can’t wait to try building this myself. The main goal is to use AI to give us clear, practical knowledge so we can stay informed and make wiser decisions.
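For anyone curious what a first attempt might look like, here is a minimal sketch using the official OpenAI Python SDK. The model name, the rough word budget (about 650 words for a five-minute read), and the prompt wording are all my own assumptions for illustration, not anything specified in this thread.

```python
# Hypothetical "five minute precis" generator.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY set in the environment; the model name and word budget
# below are illustrative choices, not recommendations from this thread.
from openai import OpenAI

client = OpenAI()

def five_minute_precis(topic: str, words: int = 650) -> str:
    """Return a roughly five-minute read on `topic`: truthful, fundamental, no padding."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; swap in whatever you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You write five-minute precis pieces. Be factual, cover the "
                    "fundamentals a newcomer needs, and omit unnecessary detail. "
                    f"Target roughly {words} words."
                ),
            },
            {"role": "user", "content": f"Write a five-minute precis of: {topic}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(five_minute_precis("the 2008 financial crisis"))
```

From there you could layer on text-to-speech for the “heard” version or a fact-checking pass, but the core idea really is just a tightly scoped prompt with a length budget.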

Really, an unrestrained, no-holds-barred version.

I believe that the majority of people would be content to pay $100–200 a month for an entirely unrestricted version.
The technology is too valuable to be held back by worries that someone might build a bomb or something. Hell, you could just hand the police everything about anyone who has used it for such purposes. Limiting so many beneficial use cases on that basis is absurd.

The notion of “gating” this technology behind morality seems foolish to me.

We’d be foolish to believe OpenAI isn’t employing unconstrained models for its own gain; by definition, there’s surely an unrestricted o1 somewhere in their labs.
In essence, then, the only people who can use one are those who can afford to build these models (OpenAI, state actors).