Any rank predictor is only as good as the data behind it. So we want to be upfront about exactly how ours works – what numbers we use, how we crunch them, and where it can go wrong. You can try the predictor here, or just keep reading.

Where the numbers come from

Every year, ACPC puts out a long PDF that lists, for every branch and college, the marks and rank of the first and last student who got admitted there. So for example: at Government Polytechnic Ahmedabad, the first student to get a Computer seat had X marks and Y rank, and the last one had A marks and B rank.

Multiply that by every branch, every college, every category – and you end up with a few hundred real, official data points connecting marks to ranks. We use the 2024 and 2025 versions of this list.
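
If you think in code, each row of that PDF boils down to something like this – a minimal sketch in Python, with field names that are ours, not ACPC's:

```python
from dataclasses import dataclass

@dataclass
class CutoffRow:
    # One row of the ACPC closure PDF. Field names are ours, not ACPC's.
    year: int           # 2024 or 2025
    college: str        # e.g. "Government Polytechnic Ahmedabad"
    branch: str         # e.g. "Computer Engineering"
    category: str       # admission category the row applies to
    first_marks: float  # marks of the first admitted student
    first_rank: int     # rank of the first admitted student
    last_marks: float   # marks of the last admitted student
    last_rank: int      # rank of the last admitted student
```

Each row gives us two real (marks, rank) points, and a few hundred rows per year trace out that year's marks-to-rank curve.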

Why not just use our leaderboard?

Our leaderboard only shows people who’ve uploaded their sheet on this site. That’s a small slice of the ~20,893 students who actually appeared, and it’s a skewed slice:

  • The people who go looking for an unofficial result checker are usually the more engaged ones, so the leaderboard skews higher than the real cohort.
  • Some branches upload more than others, so the mix isn’t representative either.

If we only used the leaderboard, high scorers would get a rank that looks worse than reality, and low scorers would get one that looks better. ACPC’s tables cover the actual ranked students, so they sidestep that problem.

Filling in the gaps

ACPC’s table doesn’t list every single score. It might have 145 and 142, but not 143.5. So for any score in between, we just draw a straight line between the two closest numbers we do have.

Example: if 145 marks gave rank 1,200 and 142 marks gave rank 1,500, then 143.5 is halfway, and we’d call it roughly rank 1,350.

Nothing fancy. We tried more complex math, but it doesn’t really help – and it gets weird near the edges of the table where there’s less data. A straight line between known points is honest about what we know.
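
In code, that straight line is plain linear interpolation. A minimal sketch (the function name and data layout are ours, not our exact implementation):

```python
def interpolate_rank(marks: float, points: list[tuple[float, int]]) -> float:
    # points: known (marks, rank) pairs from the ACPC tables,
    # sorted by marks ascending, with strictly increasing marks.
    if marks <= points[0][0]:
        return points[0][1]   # clamp instead of extrapolating below the table
    if marks >= points[-1][0]:
        return points[-1][1]  # clamp at the top end too
    for (m_lo, r_lo), (m_hi, r_hi) in zip(points, points[1:]):
        if m_lo <= marks <= m_hi:
            t = (marks - m_lo) / (m_hi - m_lo)  # how far along the segment
            return r_lo + t * (r_hi - r_lo)

# The example from above:
interpolate_rank(143.5, [(142, 1500), (145, 1200)])  # -> 1350.0
```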

Mixing two years together

One year of data isn’t enough, because every paper is a little different. The same 120 marks got pretty different ranks in 2024 vs 2025. If we only used one year, the predictor would inherit that year’s quirks.

So we use both, but weight the more recent year more heavily:

  • 2025 – 67% (most recent)
  • 2024 – 33%

If we added 2023 later, it’d get half of what 2024 gets, and so on. Each older year counts half as much as the one after it. Simple and easy to extend.
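
In code, the halving rule looks something like this (a sketch, not our exact implementation):

```python
def year_weights(years: list[int]) -> dict[int, float]:
    # Each older year counts half as much as the one after it.
    ordered = sorted(years)                      # oldest first
    raw = [2 ** i for i in range(len(ordered))]  # 1, 2, 4, ...
    total = sum(raw)
    return {y: w / total for y, w in zip(ordered, raw)}

year_weights([2024, 2025])        # {2024: 0.33, 2025: 0.67}
year_weights([2023, 2024, 2025])  # {2023: 0.14, 2024: 0.29, 2025: 0.57}
```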

Why we show a range

Mixing the two years gives us one number, but that number hides how much 2024 and 2025 disagreed. So instead of just showing the average, we run the calculation against each year separately and show you both:

  • Lower number – your rank if 2026 turns out like the year that was easier on your score.
  • Higher number – your rank if 2026 turns out like the harder year.

When the two years agree – like both putting 195 marks at rank 1 – the range collapses and you just see one number. When they disagree (which happens a lot in the 80–130 mark range), you see the gap. That gap is real uncertainty, and we’d rather show it than hide it behind a fake-precise single number.
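
Reusing the interpolation sketch from earlier, the range is just the best and worst of the per-year estimates:

```python
def predicted_range(marks: float, per_year_points: dict[int, list]) -> tuple[float, float]:
    # One estimate per year, each using that year's (marks, rank) table.
    estimates = [interpolate_rank(marks, pts) for pts in per_year_points.values()]
    return min(estimates), max(estimates)  # equal when the years agree
```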

One extra trick on the result page

When you upload your sheet, we know two things: your score, and where you sit on our leaderboard. That second piece lets us nudge the estimate a bit.

Say the historical data puts your rank somewhere between #500 and #800, but you're already ranked #1000 on our leaderboard. Your real rank in the full cohort can't be better than #1000, because everyone ranked above you here is also somewhere in the full cohort. So we shift the range up to match – it becomes about #1000 to #1300, keeping the same width. We're not making the prediction less uncertain, just moving it to a position that's actually possible.
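
As a sketch, the nudge is a shift, not a shrink:

```python
def shift_to_leaderboard(low: int, high: int, leaderboard_rank: int) -> tuple[int, int]:
    # Your full-cohort rank can't be better (numerically lower) than
    # your leaderboard rank, so slide the whole range to start there,
    # keeping the same width.
    if leaderboard_rank > low:
        return leaderboard_rank, leaderboard_rank + (high - low)
    return low, high

shift_to_leaderboard(500, 800, 1000)  # -> (1000, 1300)
```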

The standalone predictor doesn’t do this – it doesn’t know your leaderboard rank yet. It just shows you the pure historical range.

Where this can go wrong

No predictor is right all the time. Here’s the honest list:

  • 2026 might just be different. If the paper is way harder or way easier than 2024 and 2025, your real rank could fall outside what we show.
  • The number of students changes each year. 2024 and 2025 had different totals. We use 20,893 as the upper limit, but 2026 might be a different size.
  • Ties. If lots of people score the same total as you, ACPC breaks ties by BE01 marks, then BE02, then who had fewer wrong answers, then date of birth (there's a sketch of this ordering after the list). We can't tell where you'll land inside a tie cluster.
  • It’s a general merit estimate. Branch and category cutoffs shift things again during counselling, so the rank you get from us isn’t the same as your seat-allocation rank.
  • Very low scores get fuzzy. If your score is below the lowest one in the ACPC tables, we just say “near the bottom” rather than guessing a precise number.
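
For the curious, that tie-break order maps naturally onto a sort key. A sketch – the field names are ours, and we're assuming (not confirmed by ACPC) that an earlier date of birth ranks higher:

```python
def merit_sort_key(student: dict):
    # ACPC's stated order: total, then BE01, then BE02,
    # then fewer wrong answers, then date of birth.
    return (
        -student["total"],         # higher total first
        -student["be01"],          # then higher BE01 marks
        -student["be02"],          # then higher BE02 marks
        student["wrong_answers"],  # then fewer wrong answers
        student["dob"],            # then date of birth (direction assumed)
    )

# merit_list = sorted(students, key=merit_sort_key)
```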

How to actually use this

  1. Get your total marks – either by adding them up yourself, or just uploading your OMR sheet so we do it for you.
  2. When you’re filling the ACPC preference list, assume you’ll get the worse end of the range. Then keep a stretch option on top in case you’re lucky.
  3. Read the branch-wise cutoffs guide and compare rank to rank, never marks to marks. The marks needed for a given rank change every year; the rank itself doesn't.
  4. Once ACPC publishes the official merit list, use that for the actual counselling. The predictor is just for the waiting period.

Why we’re being open about all this

Most rank predictors out there don't tell you anything about how they work, which means when they're wrong, you have no way to tell why or to push back. We'd rather show the math, so if something looks off, you can call it out.

Try the tool at /rank-predictor, or upload your OMR sheet on the homepage to get the same estimate with your actual graded marks.
