First of all, Codex is absolutely amazing, especially when used inside Cursor. It's like nothing I've experienced before: going from pure vibe coding to an actual team of devs working with me, letting me develop projects on my own.
There is one major issue though: the usage limits. I understand cost feasibility is a major concern for the company, but how is AGI ever supposed to be reached if you can only use a feature for a few hours every week?
It seems like there are two limits, daily and weekly. I just hit my weekly limit on Plus within 18 hours of the reset. Yep, not even one day before I have to wait six days to use it again.
I coded quite a bit, but obviously just for a couple of hours, mostly using medium reasoning, sometimes low, and rarely high, depending on the complexity of the task.
The daily limit honestly isn't much better.
It doesn't remotely cover my workload right now, so I have to rely on other models.
Now, even though I'm still a student (please add student discount subscriptions, by the way; I'm certain they would be extremely popular), I would be willing to get the €200 version simply because of how much I like Codex in Cursor. But I haven't found any info on how much larger the Codex limits in Cursor are on that plan. Does anyone have specifics?
The web version of Codex seems to have far larger limits, but it's very buggy: it somehow makes more mistakes, most tests don't run correctly, and its understanding of the codebase always seems to be outmatched by Codex within Cursor. So using it is far less efficient, and you can't rely on it as much.
A few other general suggestions that would be helpful:
A bar or number showing how much of the daily/weekly usage limit has already been used, so you don't get interrupted in the middle of a big change!
A sound ping whenever Codex needs approval inside Cursor (there's a Cursor option, but it seems to only work for their own API agent chat).
An automatic approval mode for code changes (right now it says "always allow" but still asks every time during the session). This could even open the door to automating groups of coding agents.
Branching chats into new chats so you don't hit the context limit as quickly, just like was introduced for regular GPT-5.
Local chats being saved after closing Cursor, not just cloud chats. I don't even need to continue in the same chat, but I often want to look up previous fixes and changes. It's also annoying if the app crashes and you have to start over without knowing which changes were already made.
Most of the delay/time spent reasoning seems to be caused by Codex failing to use the correct terminal commands, not by actually writing code or text. I suppose this also eats up the context limit.
I’m running into the message: “You’ve hit your usage limit. Try again in 1 day 23 hours 37 minutes.”
The problem is that on the Team plan, as the plan administrator, I see no warning signs about consumption anywhere. /status doesn’t surface anything while you’re in session. That makes it hard to anticipate or manage usage proactively.
Hard Issues
Non-deterministic behavior
Even with the same model setting, if I ask it to do the same task across two sessions, I'll get vaguely similar results, but the performance varies wildly. One session might take 30 seconds to iterate, the other 15 minutes.
Limits triggered by Codex CLI mistakes
I've hit usage limits primarily because of Codex CLI missteps. Sometimes it runs scripts without any sort of watchdog, burning through tokens in trial-and-error loops without giving feedback or prompting the user; and some of the processes I invoke have long-running builds, though not long enough to be worth running separately and feeding the logs back in.
My Concern
While Codex CLI is still in its infancy (and no argument can be made otherwise), the current hard usage limits are not a good idea. The system should be more forgiving, especially when the token burn comes from OpenAI tooling flaws rather than genuine overuse. Softer or more flexible limits would go a long way toward supporting developers during this early phase, all the more so when you can't provide a usage meter that would let us plan our sessions better.
I quit Anthropic products for almost the same reason.
In my experience, the missteps seem to happen more often with high reasoning. It's very capable but often gets into loops and endless tasks. For example, yesterday it tried to analyze a fairly simple problem but ended up opening hundreds of files to find a single string, because the terminal search commands were used the wrong way.
I ended up stopping the reasoning and simply gave it the code snippet; after that it worked well again. When it runs into issues, it would be better if it sometimes just asked the user for more context or help, instead of trying to solve them iteratively and burning through the entire usage limit.
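For what it's worth, the symptom above (opening hundreds of files to locate a single string) is usually avoidable with one recursive grep call. A minimal sketch, with made-up file names and a made-up search string:

```shell
# Hypothetical tiny project tree (names invented for the demo)
mkdir -p demo/src
printf 'alpha\nneedle_string\nbeta\n' > demo/src/a.txt
printf 'gamma\ndelta\n' > demo/src/b.txt

# One call searches the whole tree:
# -r recurse into directories, -n print line numbers, -l list matching files only
grep -rn 'needle_string' demo    # -> demo/src/a.txt:2:needle_string
grep -rl 'needle_string' demo    # -> demo/src/a.txt
```

A single call like this returns the file, line number, and match in one shot, instead of burning tokens reading files one by one.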
Oh yeah, by the way, this bug happens quite often:
When you look into the reasoning steps (which I recommend), it sometimes thinks a file has been corrupted and wants to delete it and rewrite it from memory.
The thing is, the file is intact if you open it yourself; it just wrote everything onto a single line without line breaks. Therefore it cannot find the code when searching for, say, lines 20-40 in the terminal.
You can reformat the file, or cancel the request and send the code yourself, to avoid duplication and other issues. In general, the current approach of searching files through terminal commands seems slow and inefficient, and not very AGI-like for the future.
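To illustrate why the model concludes the file is "corrupted": when everything sits on one physical line, any line-range read comes back empty even though the content is intact. A small sketch (the file name and content are invented):

```shell
# Write a file whose entire content is one line, with no trailing newline
printf 'int main() { return 0; }' > oneline.c

# wc -l counts newline characters, so it reports 0 lines here
wc -l < oneline.c

# A line-range read like "show lines 20-40" prints nothing,
# because there is no line 20; the whole file is line 1
sed -n '20,40p' oneline.c
```

Reformatting the file (adding real line breaks) makes line-based searches work again, which matches the fix described above.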
I would highly recommend using Codex with VSCode. I use it on a Mac for everything these days. I mostly code in MATLAB and Python, and the VSCode plugin with agent mode feels pretty robust. However, I ended up here because I finally hit a usage cap on my Teams account yesterday, trying to find a way to see where I am with the limit. Like another user said, it is really hard to proactively manage and plan ahead for usage, or even choose the reasoning level accordingly, if we can't see where we are.
Actually, you said everything I wanted to say. I saw Codex for the first time two days ago, and it was like a dream come true; it's like a group of developers working with you.
I have zero coding knowledge, and I'm building something big just by typing in my mother language.
But we need more credit. I bought the Business plan and my limit ran out, so I had to buy more just to complete a few tasks.
I bought a ChatGPT Plus subscription yesterday and used it for a short while without issues. Today, I ran into repeated session failures and then received this message:
I’m surprised to hit a limit so soon after subscribing.
If this is the normal allowance, the subscription doesn't seem practical for development work, and it's far from what you'd expect for professional use.
Also, waiting five days?! How is that workable for anyone trying to finish an idea? So the only options are to pay $200 for Pro or jump to a different service?
If that’s the case, I may need to cancel and consider the payment a loss.
I would easily accept the usage limit if I could monitor it while working. But the bigger problem is that GPT-5 Codex is not mature enough, and it takes several prompts to make the code work. When implementing something new on a page, old features sometimes stop working; I've even ended up with an incomplete page that stopped working entirely. So we spend most of our limit just instructing Codex to do what we asked.
Just hit my first usage limit with Codex. I was using it pretty heavily the past few days, and now I apparently have to wait three days. It's a shame there's no way to see what the limit is; being cut off like that mid-session is kind of annoying.
Does anyone know if that wait will go down in a day or two, or is it fixed at that timeframe now?