This is what I am doing now, and I would like to know whether there is a better way.
import numpy as np
from scipy import integrate
from sklearn.mixture import GaussianMixture as GMM
model = GMM(n, covariance_type="full").fit(X)

def cdf(x):
    return integrate.quad(lambda t: np.exp(model.score(t)), -np.inf, x)[0]
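For reference, a corrected version of the quadrature approach can be sketched as follows. Note that `score_samples` (not `score`) returns the per-sample log-density, and it expects a 2-D array; the data `X` and the single-component model below are purely hypothetical stand-ins.

```python
import numpy as np
from scipy import integrate
from sklearn.mixture import GaussianMixture

# Hypothetical 1-D data centered near 50.
rng = np.random.default_rng(0)
X = rng.normal(50.0, 2.0, size=(1000, 1))
model = GaussianMixture(n_components=1, covariance_type="full").fit(X)

def cdf_quad(x):
    # score_samples returns log p(t); exponentiate to recover the density.
    pdf = lambda t: np.exp(model.score_samples(np.array([[t]])))[0]
    return integrate.quad(pdf, -np.inf, x)[0]

print(cdf_quad(50.0))  # should be close to 0.5 for data centered at 50
```

This works, but numerically integrating the density for every evaluation is slow, which motivates the closed-form answer below.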
The cumulative distribution function (CDF) of a Gaussian mixture is the weighted sum of the component CDFs F_1, F_2, F_3, ... with the corresponding weights ω_1, ω_2, ω_3, ..., i.e. F_mixed = ω_1 * F_1 + ω_2 * F_2 + ω_3 * F_3 + ... So the original answer was:
from scipy.stats import norm
weights = [0.163, 0.131, 0.486, 0.112, 0.107]
means = [45.279, 55.969, 49.315, 53.846, 61.953]
covars = [0.047, 1.189, 3.632, 0.040, 0.198]
def mix_norm_cdf(x, weights, means, covars):
    mcdf = 0.0
    for i in range(len(weights)):
        mcdf += weights[i] * norm.cdf(x, loc=means[i], scale=covars[i])
    return mcdf
print(mix_norm_cdf(50, weights, means, covars))
Output
0.442351546658755
The `scale` parameter needs to be the standard deviation, not the (co)variance! Switching to the square root fixed it for me. - Sealander