
ENOB vs Resolution

+3
−1

I'm trying to clarify my understanding of the relationship between ADC resolution and ENOB (Effective Number of Bits), particularly when it comes to practical calculations. From what I understand, ENOB represents the actual performance of an ADC accounting for real-world imperfections like noise and distortion, while the resolution is the theoretical bit depth of the converter. Since ENOB is invariably less than the stated resolution, I'm confused about which value should be used in calculations.

Specifically, when calculating the LSB (Least Significant Bit) step size, should I be using: 1 LSB = V_ref / 2^Resolution (the theoretical value) or 1 LSB = V_ref / 2^ENOB (based on effective performance)?

If ENOB is the more meaningful metric for actual performance, why do datasheets prominently feature the resolution specification? Is the resolution purely a marketing/theoretical spec, while ENOB tells the real story? Or do these two parameters serve different purposes in system design?

I'd appreciate any insights into when each specification is relevant and how you approach LSB calculations in your designs.

My current understanding:

Resolution defines the theoretical quantization - a 16-bit ADC has 2^16 discrete output codes, so with a 1V reference:

1 LSB = 1V / 2^16 = 15.26 µV (the step size between codes)
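That step size is straightforward to compute (a quick sketch of the arithmetic above):

```python
V_REF = 1.0   # volts, reference voltage
BITS = 16     # stated resolution

# Theoretical step size between adjacent output codes
lsb = V_REF / 2**BITS
print(f"1 LSB = {lsb * 1e6:.2f} µV")  # → 15.26 µV
```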

ENOB represents the actual usable performance accounting for real-world noise and distortion. If the same 16-bit ADC has ENOB = 14 bits, then:

Effective noise floor ≈ 1V / 2^14 = 61 µV RMS. Signals smaller than ~61 µV will be buried in the noise (hard to distinguish from random fluctuations)

This means that while the ADC outputs 16-bit codes with 15.26 µV steps, the inherent noise causes the readings to fluctuate by approximately ±2 LSB, making the effective resolution equivalent to a cleaner 14-bit converter.
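Putting numbers on that comparison (same figures as above):

```python
V_REF = 1.0   # volts, reference voltage
BITS = 16     # stated resolution
ENOB = 14     # assumed effective number of bits

lsb = V_REF / 2**BITS     # 15.26 µV step between output codes
noise = V_REF / 2**ENOB   # ~61 µV effective noise level

# The noise spans 2^(BITS - ENOB) = 4 codes, i.e. readings jitter ~±2 LSB
print(noise / lsb)
```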

Is this correct?


1 answer

+3
−0

Your understanding is largely correct. You have touched on the difference between accuracy and precision.
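As a point of reference (conventional definition, not something specific to this question): ENOB is usually derived from a measured signal-to-noise-and-distortion ratio (SINAD) as ENOB = (SINAD − 1.76 dB) / 6.02 dB.

```python
def enob_from_sinad(sinad_db: float) -> float:
    # Each effective bit is worth ~6.02 dB of SINAD; the 1.76 dB term is the
    # quantization-noise offset for an ideal full-scale sine-wave input.
    return (sinad_db - 1.76) / 6.02

print(enob_from_sinad(74.0))  # a 74 dB SINAD corresponds to 12.0 effective bits
```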

ENOB can differ from raw bits not only due to inherent random noise, but also non-linearity. In that case, individual counts can still be useful in detecting small variations of a signal, but the absolute level of that signal is only known to the ENOB level.

ENOB can also vary with operating conditions. A/D converter performance degrades at higher speeds. The raw bit count remains the same no matter how you run the converter (within specified limits), but ENOB may differ with input frequency, setup time, and conversion time.

A/Ds are also often specified as monotonic, even when ENOB is more than one bit below the number of raw bits. That means the output code will never decrease as the input increases (no flipped codes), even if such a reversal would be within the ENOB spec.

There is also a difference between random and systematic noise. Random noise can be reduced by averaging multiple readings, effectively giving you a slower but more accurate A/D. Systematic noise, like built-in non-linearity, will remain no matter how clean you make the signals or how much you filter the result. Repeatable non-linearity can be addressed by calibrating individual units.
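The averaging point can be illustrated with a small simulation (hypothetical numbers: a DC input with Gaussian noise at roughly the 61 µV level from the question; averaging 16 samples should cut the random noise by about √16 = 4):

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 0.5    # volts, hypothetical DC input
NOISE_RMS = 61e-6   # volts, assumed random noise level

def read_adc():
    # Hypothetical noisy reading: true value plus Gaussian random noise
    return TRUE_VALUE + random.gauss(0.0, NOISE_RMS)

def averaged_reading(n):
    # Average n raw readings; random noise falls roughly as 1/sqrt(n)
    return sum(read_adc() for _ in range(n)) / n

raw = [read_adc() for _ in range(2000)]
avg = [averaged_reading(16) for _ in range(2000)]
print(statistics.stdev(raw), statistics.stdev(avg))
```

The averaged readings show roughly a quarter of the spread of the raw ones; systematic errors such as non-linearity would be unaffected by this.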

Both the raw number of bits and the effective number of bits over various operating points are useful in the design of the larger system.
