What are the correct values for precision and recall in edge cases?
Precision is defined as: p = true positives / (true positives + false positives). Is it correct that, as true positives and false positives approach 0, precision approaches 1? The same question for recall: r = true positives / (true positives + false negatives). I am currently implementing a statistical test where I …
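To make the edge case concrete, here is a minimal sketch of the two formulas with an explicit guard for the 0/0 case. The function names and the `zero_division` convention are my own for illustration; different libraries pick different defaults for the undefined case (some return 0, some return 1), so the choice is a convention, not a mathematical consequence:

```python
def precision(tp, fp, zero_division=1.0):
    """p = tp / (tp + fp).

    When tp + fp == 0 (no positive predictions), the ratio is 0/0 and
    undefined; we return `zero_division` instead of raising. Returning 1.0
    here matches the limit suggested in the question, but 0.0 is an equally
    common convention.
    """
    denom = tp + fp
    return tp / denom if denom > 0 else zero_division


def recall(tp, fn, zero_division=1.0):
    """r = tp / (tp + fn).

    Same guard as above: when tp + fn == 0 (no actual positives), the
    value is a convention, not a computation.
    """
    denom = tp + fn
    return tp / denom if denom > 0 else zero_division
```

Usage: `precision(0, 0)` returns 1.0 under this convention, while `precision(0, 0, zero_division=0.0)` returns 0.0; the well-defined cases, e.g. `precision(5, 5)`, are unaffected by the choice.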
precision-recall