Journal of Inequalities and Applications, 2019
In this paper, we present some results on bounds for the Shannon and Zipf-Mandelbrot entropies. Further, we define linear functionals and present their properties. We also construct a new family of exponentially convex functions and Cauchy-type means.
The main purpose of this paper is to find new estimations for the Shannon and Zipf–Mandelbrot entropies. We apply some refinements of the Jensen inequality to obtain different bounds for these entropies. Initially, we use a particular convex function in the refinement of the Jensen inequality and then vary the weights and the domain of the function to obtain general bounds for the Shannon entropy (SE). As particular cases of these general bounds, we derive some bounds for the Shannon entropy which are, in fact, applications of other well-known refinements of the Jensen inequality. Finally, we derive different estimations for the Zipf–Mandelbrot entropy (ZME) by using the new bounds of the Shannon entropy for the Zipf–Mandelbrot law (ZML). We also discuss particular cases and the bounds related to two different parameters of the Zipf–Mandelbrot entropy. At the end of the paper we give some applications in linguistics.
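As a concrete illustration of the objects in this abstract, the following sketch (Python; the parameter values N, q, s and the helper names are illustrative choices, not taken from the paper) computes the Zipf–Mandelbrot probabilities, checks that their Shannon entropy matches the closed-form Zipf–Mandelbrot entropy, and exhibits the crude Jensen bound H(p) ≤ ln N that refinements of the Jensen inequality sharpen.

```python
import math

def zipf_mandelbrot_pmf(N, q, s):
    """Probabilities f(i; N, q, s) = (i+q)^(-s) / H_{N,q,s} on ranks 1..N."""
    weights = [(i + q) ** (-s) for i in range(1, N + 1)]
    H = sum(weights)                     # generalized harmonic number H_{N,q,s}
    return [w / H for w in weights]

def shannon_entropy(p):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def zm_entropy(N, q, s):
    """Closed-form Zipf-Mandelbrot entropy:
       Z = (s/H) * sum_i ln(i+q)/(i+q)^s + ln H."""
    H = sum((i + q) ** (-s) for i in range(1, N + 1))
    tail = sum(math.log(i + q) / (i + q) ** s for i in range(1, N + 1))
    return s * tail / H + math.log(H)

N, q, s = 100, 2.0, 1.1                  # illustrative parameters
p = zipf_mandelbrot_pmf(N, q, s)
assert abs(shannon_entropy(p) - zm_entropy(N, q, s)) < 1e-12
# Jensen's inequality (with the convex function t -> -ln t) gives the
# crude bound H(p) <= ln N; the paper's refinements tighten such bounds.
print(f"H = {shannon_entropy(p):.6f} <= ln N = {math.log(N):.6f}")
```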
Results in Mathematics, 2019
This is a continuation of the author's paper "Convexity properties of some entropies", published as Raşa (Results Math 73:105, 2018). We consider the sum F_n(x) of the squared fundamental Bernstein polynomials of degree n, in relation to the Rényi entropy and the Tsallis entropy of the binomial distribution with parameters n and x. Several functional equations and inequalities for these functions are presented. In particular, we give a new and simpler proof of a conjecture asserting that F_n is logarithmically convex. New combinatorial identities are obtained as a byproduct. Rényi and Tsallis entropies of more general families of probability distributions are considered. The paper ends with three new conjectures.
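A minimal numerical sketch of what the conjecture asserts (not the paper's proof; the degree n, the step h, and the sample points are arbitrary choices): F_n(x) is evaluated directly from the squared Bernstein basis, and log-convexity is tested through second differences of ln F_n.

```python
import math

def bernstein_sq_sum(n, x):
    """F_n(x): sum of the squared fundamental Bernstein polynomials,
       F_n(x) = sum_k [C(n,k) x^k (1-x)^(n-k)]^2."""
    return sum((math.comb(n, k) * x**k * (1 - x)**(n - k)) ** 2
               for k in range(n + 1))

n, h = 10, 1e-3                          # arbitrary degree and step
for x in [0.1, 0.25, 0.5, 0.75, 0.9]:
    # Second differences of ln F_n are nonnegative iff F_n is locally
    # logarithmically convex, as the conjecture asserts.
    d2 = (math.log(bernstein_sq_sum(n, x - h))
          - 2 * math.log(bernstein_sq_sum(n, x))
          + math.log(bernstein_sq_sum(n, x + h)))
    assert d2 >= 0, f"log-convexity violated at x = {x}"
print("ln F_n numerically convex at the sampled points")
```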
Results in Mathematics, 2018
We consider a family of probability distributions depending on a real parameter x, and study the logarithmic convexity of the sum of the squared probabilities. Applications concerning bounds and concavity properties of Rényi and Tsallis entropies are given. Finally, some extensions and an open problem are presented.
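For one member of such a family, the Poisson distribution with parameter x, the sum of squared probabilities can be checked for logarithmic convexity numerically. The sketch below (the truncation length and sample points are arbitrary choices) illustrates the property that drives the entropy applications: log-convexity of S(x) yields concavity of the order-2 Rényi entropy H_2(x) = -ln S(x).

```python
import math

def sq_prob_sum_poisson(x, terms=200):
    """S(x) = sum of the squared Poisson(x) probabilities (truncated series)."""
    p = math.exp(-x)           # P(X = 0)
    s = p * p
    for k in range(1, terms):
        p *= x / k             # P(X = k) from P(X = k - 1)
        s += p * p
    return s

h = 1e-3
for x in [0.5, 1.0, 2.0, 5.0]:
    d2 = (math.log(sq_prob_sum_poisson(x - h))
          - 2 * math.log(sq_prob_sum_poisson(x))
          + math.log(sq_prob_sum_poisson(x + h)))
    assert d2 >= 0  # second differences of a convex function are nonnegative
print("ln S numerically convex at the sampled points")
```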
Carpathian Journal of Mathematics, 2018
We consider a family of probability distributions depending on a real parameter and including the binomial, Poisson and negative binomial distributions. The corresponding index of coincidence satisfies a Heun differential equation and is a logarithmically convex function. Combining these facts we get bounds for the index of coincidence, and consequently for Rényi and Tsallis entropies of order 2.
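The connection used here is that the index of coincidence S(x) = Σ_k P(X = k)² determines the order-2 entropies directly: H_2 = -ln S (Rényi) and T_2 = 1 - S (Tsallis), so any bound for S translates at once into bounds for both entropies. A small sketch for the binomial member of the family (parameter values are arbitrary):

```python
import math

def index_of_coincidence_binomial(n, x):
    """Index of coincidence S(x) = sum of squared Binomial(n, x) probabilities."""
    return sum((math.comb(n, k) * x**k * (1 - x)**(n - k)) ** 2
               for k in range(n + 1))

n, x = 20, 0.3                           # arbitrary parameters
S = index_of_coincidence_binomial(n, x)
renyi_2 = -math.log(S)                   # Rényi entropy of order 2
tsallis_2 = 1 - S                        # Tsallis entropy of order 2
print(f"S = {S:.6f}, H_2 = {renyi_2:.6f}, T_2 = {tsallis_2:.6f}")
```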
Pacific Journal of Mathematics, 1977
Let Λ_n = {P ∈ R^n : P = (p_1, p_2, …, p_n), where Σ_{i=1}^n p_i = 1 and p_i > 0 for i = 1, 2, …, n} and let B_n = {P ∈ Λ_n : p_1 ≥ p_2 ≥ ⋯ ≥ p_n}. We show that the inequality (1), for all P, Q ∈ B_n and some integer n ≥ 3, implies that f(p) = c log p + d, where c is an arbitrary nonnegative number and d is an arbitrary real number. We show, furthermore, that if we restrict the domain of the inequality (1) to those P, Q ∈ B_n for which P ≻ Q (Hardy–Littlewood–Pólya order), then any function that is convex and increasing satisfies (1).
Periodica Mathematica Hungarica, 2016
It is well-known that the Shannon entropies of some parameterized probability distributions are concave functions with respect to the parameter. In this paper we consider a family of such distributions (including the binomial, Poisson, and negative binomial distributions) and investigate the complete monotonicity of their Shannon, Rényi, and Tsallis entropies.
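Complete monotonicity involves derivatives of all orders; as a first-order sanity check one can at least verify numerically the concavity of the Shannon entropy in the parameter that the abstract calls well-known. A sketch for the binomial member (n, the step, and the sample points are arbitrary):

```python
import math

def shannon_entropy_binomial(n, x):
    """Shannon entropy (natural log) of Binomial(n, x), as a function of x."""
    h = 0.0
    for k in range(n + 1):
        p = math.comb(n, k) * x**k * (1 - x)**(n - k)
        if p > 0:
            h -= p * math.log(p)
    return h

n, step = 15, 1e-3                       # arbitrary choices
for x in [0.1, 0.3, 0.5, 0.7, 0.9]:
    d2 = (shannon_entropy_binomial(n, x - step)
          - 2 * shannon_entropy_binomial(n, x)
          + shannon_entropy_binomial(n, x + step))
    assert d2 <= 0  # nonpositive second differences: concavity in x
print("Shannon entropy of Binomial(n, x) numerically concave in x")
```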
Demonstratio Mathematica
To procure inequalities for divergences between probability distributions, Jensen's inequality is the key to success. The Shannon, relative, and Zipf-Mandelbrot entropies have many applications in applied sciences, such as information theory, biology, and economics. We consider discrete and continuous cyclic refinements of Jensen's inequality and extend them from convex functions to higher-order convex functions by means of different new Green functions, employing the Hermite interpolating polynomial whose error term is approximated by Peano's kernel. As an application of our results, we give new bounds for the Shannon, relative, and Zipf-Mandelbrot entropies.
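The prototypical divergence bound obtained from Jensen's inequality, which refinements of this kind sharpen, is the nonnegativity of relative entropy: with the convex function f(t) = -ln t and weights p_i, D(p‖q) = Σ_i p_i f(q_i/p_i) ≥ f(Σ_i p_i q_i/p_i) = -ln 1 = 0. A minimal sketch (the vectors p and q are arbitrary):

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) = sum_i p_i ln(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]                      # arbitrary probability vectors
q = [0.2, 0.5, 0.3]
# Jensen form: D(p||q) = sum_i p_i * (-ln(q_i / p_i)) >= -ln(sum_i q_i) = 0.
jensen_lhs = sum(pi * -math.log(qi / pi) for pi, qi in zip(p, q))
assert abs(jensen_lhs - kl_divergence(p, q)) < 1e-12
assert kl_divergence(p, q) >= 0
print(f"D(p||q) = {kl_divergence(p, q):.6f} >= 0")
```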
Communications in Information and Systems, 2002
In this paper we prove a countable set of non-Shannon-type linear information inequalities for entropies of discrete random variables, i.e., information inequalities which cannot be reduced to the "basic" inequality I(X : Y |Z) ≥ 0. Our results generalize the inequalities of Z. Zhang and R. Yeung (1998) who found the first examples of non-Shannon-type information inequalities.
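For reference, the "basic" inequality is the nonnegativity of conditional mutual information. The sketch below (the joint distribution is chosen arbitrarily for illustration) computes I(X : Y | Z) from a joint pmf and confirms it is nonnegative; the paper's point is that some valid entropy inequalities cannot be derived from positive combinations of instances of this one.

```python
import math
from collections import defaultdict
from itertools import product

def conditional_mutual_information(joint):
    """I(X : Y | Z) in nats from a joint pmf {(x, y, z): prob}, via
       I(X : Y | Z) = sum p(x,y,z) ln[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]."""
    pz, pxz, pyz = defaultdict(float), defaultdict(float), defaultdict(float)
    for (x, y, z), p in joint.items():
        pz[z] += p
        pxz[(x, z)] += p
        pyz[(y, z)] += p
    return sum(p * math.log(p * pz[z] / (pxz[(x, z)] * pyz[(y, z)]))
               for (x, y, z), p in joint.items() if p > 0)

# A joint distribution on {0,1}^3; the weights are arbitrary.
joint = {xyz: w / 36.0
         for xyz, w in zip(product([0, 1], repeat=3), [1, 2, 3, 4, 5, 6, 7, 8])}
cmi = conditional_mutual_information(joint)
assert cmi >= 0                          # the basic inequality
print(f"I(X : Y | Z) = {cmi:.6f}")
```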
arXiv: Classical Analysis and ODEs, 2015
It is well-known that the Shannon entropies of some parameterized probability distributions are concave functions with respect to the parameter. In this paper we consider a family of such distributions (including the binomial, Poisson, and negative binomial distributions) and investigate the concavity of their Shannon, Rényi, and Tsallis entropies.
CMS Books in Mathematics, 2006
Prob(X): set of Borel probability measures on X. δ_a: Dirac measure concentrated at a. A(s,t), G(s,t), H(s,t): arithmetic, geometric and harmonic means. I(s,t): identric mean. L(s,t): logarithmic mean. M_p(s,t), M_p(f; μ): Hölder (power) mean. M^[φ]: quasi-arithmetic mean. □: end of a proof.

A mean M is homogeneous if M(αs, αt) = αM(s,t) for all α > 0 and all s, t ∈ I. Several examples of strict, symmetric and homogeneous means of strictly positive variables are listed below; they are all continuous (that is, continuous in both arguments). Hölder's means (also called power means): M_p(s,t) = ((s^p + t^p)/2)^(1/p) for p ≠ 0, and G(s,t) = M_0(s,t) = lim_{p→0} M_p(s,t) = √(st), to which we can add M_{-∞}(s,t) = inf{s,t} and M_∞(s,t) = sup{s,t}. Then A = M_1 is the arithmetic mean and G is the geometric mean. The mean M_{-1} is known as the harmonic mean (usually denoted H). Lehmer's means: L_p(s,t) = (s^p + t^p)/(s^(p-1) + t^(p-1)).

A function f : I → R is midpoint convex if f((x + y)/2) ≤ (f(x) + f(y))/2 for every x, y ∈ I. In the context of continuity (which appears to be the only one of real interest), midpoint convexity means convexity, that is,

f((1 − λ)x + λy) ≤ (1 − λ)f(x) + λf(y)   (C)

for every x, y ∈ I and every λ ∈ [0, 1]; see Theorem 1.1.4 for details. By mathematical induction the inequality (C) extends to convex combinations of finitely many points in I, and then to random variables associated with arbitrary probability spaces. These extensions are known as the discrete Jensen inequality and the integral Jensen inequality, respectively. It turns out that similar results hold when the arithmetic mean is replaced by any other mean with nice properties, for example the regular means. A mean M : I × I → R is called regular if it is homogeneous, symmetric, continuous and also increasing in each variable (when the other is fixed). Notice that the Hölder means and the Stolarsky means are regular; the Lehmer mean L_2 is not increasing (and thus not regular).

Regular means can be extended from pairs of real numbers to random variables associated with probability spaces through a process providing a nonlinear theory of integration. Consider first a discrete probability field (X, Σ, μ), where X = {1, 2}, Σ = P({1, 2}) and μ : P({1, 2}) → [0, 1] is the probability measure with μ({i}) = λ_i for i = 1, 2; a random variable associated with this space and taking values in I is any function x : X → I. Put

M(x_1, x_2; 1, 0) = x_1,  M(x_1, x_2; 0, 1) = x_2,  M(x_1, x_2; 1/2, 1/2) = M(x_1, x_2),

and for the other dyadic values of λ_1 and λ_2 put

M(x_1, x_2; 3/4, 1/4) = M(M(x_1, x_2), x_1),  M(x_1, x_2; 1/4, 3/4) = M(M(x_1, x_2), x_2),

and so on. In the general case, every λ_1 ∈ [0, 1) has a unique dyadic representation λ_1 = Σ_{k=1}^∞ d_k/2^k (where d_1, d_2, d_3, … is a sequence of 0s and 1s that is not eventually 1), and we put

M(x_1, x_2; λ_1, λ_2) = lim_{n→∞} M(x_1, x_2; Σ_{k=1}^n d_k/2^k, 1 − Σ_{k=1}^n d_k/2^k).
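A minimal sketch of this dyadic construction (the helper name and parameter values are mine; the geometric mean serves as the regular mean M): bisect the weight interval, using the fact that the value at the midpoint of a dyadic parameter interval is M of the values at its endpoints. For M = G the limit is the weighted geometric mean x_1^λ x_2^(1−λ), which the code verifies.

```python
import math

def dyadic_mean(M, x1, x2, lam, iters=60):
    """Approximate M(x1, x2; lam, 1-lam) by dyadic bisection: the value at
       the midpoint of a parameter interval is M of the endpoint values."""
    lo_l, lo_v = 0.0, x2   # weights (0, 1): all mass on x2
    hi_l, hi_v = 1.0, x1   # weights (1, 0): all mass on x1
    for _ in range(iters):
        mid_l, mid_v = (lo_l + hi_l) / 2.0, M(lo_v, hi_v)
        if lam >= mid_l:
            lo_l, lo_v = mid_l, mid_v
        else:
            hi_l, hi_v = mid_l, mid_v
    return (lo_v + hi_v) / 2.0

G = lambda s, t: math.sqrt(s * t)        # the geometric mean is regular
x1, x2, lam = 8.0, 2.0, 0.3              # arbitrary inputs
approx = dyadic_mean(G, x1, x2, lam)
exact = x1**lam * x2**(1 - lam)          # weighted geometric mean
assert abs(approx - exact) < 1e-9
print(approx, exact)
```

Two bisection steps reproduce the text's examples: with λ = 3/4 the loop produces M(M(x_1, x_2), x_1), exactly as in the construction above.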