
How are the assumptions behind two ways of deriving the Rayleigh-Jeans law related?

+4
−0

The Rayleigh-Jeans law does a good job of describing the spectral radiance of a black body at low frequencies: $$B_{\nu}(T)=\frac{2kT\nu^2}{c^2}$$ with $T$ the temperature and $\nu$ the frequency. There are a couple of ways to derive it. One, requiring no explicit assumptions about the energy range of the photons (and presented rather nicely in Section 2.4 of Elements of Radio Astronomy), involves analyzing standing wave modes in a cubical cavity of side length $a$, subject to appropriate boundary conditions. You can show easily that $$B_{\nu}(T)=\frac{c}{4\pi}\frac{N_{\nu}(\nu)\langle E\rangle}{a^3}$$ with $N_{\nu}(\nu)$ the number of modes per unit frequency at a frequency $\nu$ and $\langle E\rangle$ the average energy of a mode. Assuming that the mode energies follow a continuous Boltzmann distribution (the classical approach), you find that $\langle E\rangle=kT$; substituting in the proper expression for $N_{\nu}(\nu)$ derived by studying the modes leads you to the Rayleigh-Jeans law. Of course, if you assumed that the mode energies are quantized (the quantum approach), you would end up with the full Planck function: $$B_{\nu}(T)=\frac{2h\nu^3}{c^2}\frac{1}{e^{h\nu/kT}-1}$$
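As a quick numerical sanity check (a sketch, assuming the standard mode density $N_\nu(\nu)=8\pi a^3\nu^2/c^3$ for a cubical cavity with two polarizations, which isn't spelled out above), substituting $\langle E\rangle=kT$ into this expression does reproduce the Rayleigh-Jeans law, with the cavity size $a$ cancelling:

```python
# Sketch: plugging the standard mode density N_nu = 8*pi*a^3*nu^2/c^3
# and <E> = k*T into B_nu = (c / (4*pi)) * N_nu * <E> / a^3
# should reproduce the Rayleigh-Jeans law 2*k*T*nu^2/c^2.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
c = 2.99792458e8   # speed of light, m/s

def rayleigh_jeans(nu, T):
    return 2 * k * T * nu**2 / c**2

def from_mode_counting(nu, T, a=1.0):
    N_nu = 8 * math.pi * a**3 * nu**2 / c**3  # modes per unit frequency
    E_avg = k * T                             # classical equipartition
    return (c / (4 * math.pi)) * N_nu * E_avg / a**3

# The two agree, and the cavity size a drops out:
print(rayleigh_jeans(1e9, 300.0))
print(from_mode_counting(1e9, 300.0))
```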

The second approach is to start from the Planck function (derived via your method of choice) and take the low-frequency limit, where $h\nu\ll kT$, so $$e^{h\nu/kT}\approx1+\frac{h\nu}{kT}$$ which, when substituted into the Planck function, yields the Rayleigh-Jeans law.
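The quality of this limit is easy to probe numerically: the ratio of the Planck function to the Rayleigh-Jeans law is $x/(e^x-1)$ with $x=h\nu/kT$, which tends to 1 as $\nu\to0$. A short sketch:

```python
import math

h = 6.62607015e-34  # Planck constant, J s
k = 1.380649e-23    # Boltzmann constant, J/K
c = 2.99792458e8    # speed of light, m/s

def planck(nu, T):
    # expm1 avoids precision loss when h*nu/(k*T) is tiny
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu, T):
    return 2 * k * T * nu**2 / c**2

T = 300.0
for nu in (1e9, 1e11, 1e13):   # h*nu/kT ~ 1.6e-4, 1.6e-2, 1.6
    print(nu, planck(nu, T) / rayleigh_jeans(nu, T))  # ratio -> 1 as nu -> 0
```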

In short, two different assumptions lead us to the same result: one is that photon energies aren't quantized; the other is that we're working at low frequencies. They seem a bit different on the surface, but I assume they're related. Are they, and if so, how? My guess is that it boils down to the fact that at low frequencies, adjacent modes of energies $E=nh\nu$ and $E=(n+1)h\nu$ are only separated by a small amount of energy (as $h\nu$ is smaller than at high frequencies), so differences between energy levels are small and the spectrum of energies can be approximated as being continuous - but I'm not sure. I might be overthinking this a bit.


1 answer

+2
−0

One way this can be explained is from the perspective of numerically approximating an integral. From this perspective, the concordance of "continuous" and "low frequency" has to do with the low frequency regime being where the approximation of the "continuous" integral is valid/best.

If you wanted to numerically compute an approximation to $\int_0^b f(x)dx$, you could use a Riemann sum, producing $$\int_0^b f(x)dx \approx \Delta x\sum_{n=0}^{\lfloor b/\Delta x\rfloor}f(n\Delta x)$$ The quality of this approximation will depend on the step size $\Delta x$: the smaller, the better.
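A minimal sketch of that convergence, using $f(x)=e^{-x}$ on $[0,5]$ as a stand-in integrand:

```python
import math

def riemann(f, b, dx):
    """Left Riemann sum: dx * sum of f(n*dx) for n = 0 .. floor(b/dx)."""
    return dx * sum(f(n * dx) for n in range(int(b / dx) + 1))

exact = 1 - math.exp(-5)   # integral of exp(-x) over [0, 5]
for dx in (1.0, 0.1, 0.01):
    err = abs(riemann(lambda x: math.exp(-x), 5.0, dx) - exact)
    print(dx, err)   # the error shrinks roughly linearly with dx
```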

The integral we care about is improper. Ignoring any analytic subtleties I'll just say that we'll have $$\int_0^\infty f(x) dx \approx \Delta x\sum_{n=0}^\infty f(n\Delta x)$$ assuming $f$ is sufficiently nice and $\Delta x$ sufficiently small.

The specific integrals, referring to section 2.4 of Elements of Radio Astronomy which you reference, are the integrals in the average energy, $$\langle E \rangle = \frac{\int_0^\infty EP(E)dE}{\int_0^\infty P(E)dE} \quad \text{where} \quad P(E)\propto e^{-\frac{E}{kT}}$$

Approximating each of the integrals independently produces $$\begin{align} \int_0^\infty EP(E)dE & \approx (\Delta E)^2\sum_{n=0}^\infty n(e^{-\frac{\Delta E}{kT}})^n \\ & = (\Delta E)^2 e^{-\frac{\Delta E}{kT}} / (1- e^{-\frac{\Delta E}{kT}})^2 \end{align}$$ and $$\begin{align} \int_0^\infty P(E)dE & \approx \Delta E\sum_{n=0}^\infty (e^{-\frac{\Delta E}{kT}})^n \\ & = \Delta E / (1 - e^{-\frac{\Delta E}{kT}}) \end{align}$$ with the resulting ratio being $$\langle E \rangle \approx \frac{\Delta E }{e^{\frac{\Delta E}{kT}} - 1}$$
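Numerically, this closed form interpolates between the two regimes: for step sizes $\Delta E\ll kT$ it recovers the classical $\langle E\rangle = kT$, while for large steps the average energy is strongly suppressed. A sketch:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0
kT = k * T

def avg_energy(dE):
    # closed form of the Riemann-sum ratio: dE / (exp(dE/kT) - 1)
    return dE / math.expm1(dE / kT)

for ratio in (10.0, 1.0, 0.1, 0.001):  # step size dE in units of kT
    print(ratio, avg_energy(ratio * kT) / kT)  # -> 1 as the step shrinks
```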

We're going to choose $\Delta E = h\nu$, and the quality, and thus validity, of this approximation will improve as $\nu$ shrinks. Of course, the physics tells us that the approximation goes the other way, i.e. the integral is a continuum approximation of the sum. Regardless of which expression we consider the approximation, the point is that the expressions are only approximately equivalent for sufficiently small step sizes, i.e. sufficiently small values of $\nu$. Many numerical methods have constraints on how big the step size can be before the method becomes unstable, at which point the bounds on errors no longer apply. It is likely the ultraviolet catastrophe can be viewed as this numerical method becoming unstable.
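One way to see that instability numerically (a sketch, with $T=300\,\mathrm{K}$ chosen for illustration): integrating the two spectral radiances over frequency, the Planck integral converges once the cutoff passes $\nu\sim kT/h$, while the Rayleigh-Jeans integral grows as the cube of the cutoff, which is the ultraviolet catastrophe.

```python
import math

h, k, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8
T = 300.0

def planck(nu):
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def rj(nu):
    return 2 * k * T * nu**2 / c**2

def integrate(f, nu_max, steps=100_000):
    # simple midpoint rule
    dnu = nu_max / steps
    return sum(f((i + 0.5) * dnu) for i in range(steps)) * dnu

for nu_max in (1e13, 1e14, 1e15):
    print(nu_max, integrate(rj, nu_max), integrate(planck, nu_max))
# Planck saturates; Rayleigh-Jeans keeps growing as nu_max**3.
```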
