105 Matching Annotations
  1. May 2022
    1. is the chance of death among those individuals that are older and received new treatment

      Thus: the reference group is formed by taking into account everything that is not listed in the term column

    1. Coronavirus vaccination acceptability study (CoVAccS) 2020 [updated 2020 Aug 10; cited 2020 Aug 11]. Available from: https://osf.io/94856/.

      Note: access to the full survey

  2. Apr 2022
  3. Mar 2022
    1. 3 egual.BY.smdfslv.Male 0.84*** ( 0.08 ) 0.99*** ( 0.08 )

      improved a lot

    2. cfi tli

      CFI and TLI are much better if we use the ordinal model, even though the chi-square is still high

    3. fit_eg_sc_metric_p 36 113.037 27.7229 16 0.03411 *

      no scalar invariance even if we keep the constraint

    4. equal =~ dfincac","dfincac | t1"

      another change

    5. dfincac ~*~ c(1,1)*dfincac

      impose a constraint to improve the model
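
      A minimal sketch of where such a constraint sits in a multi-group ordinal model. Only dfincac, smdfslv, gndr and the ~*~ line come from these notes; the third indicator (gincdif), the data frame name (ess) and the group.equal settings are assumptions.

      ```r
      library(lavaan)

      # Hypothetical ordinal measurement model for 'egual' (gincdif is a placeholder item)
      model_eg <- '
        egual =~ gincdif + dfincac + smdfslv
        # constraint from the notes: fix the scaling factor of dfincac to 1 in both groups
        dfincac ~*~ c(1, 1)*dfincac
      '

      fit_eg <- cfa(model_eg, data = ess,
                    ordered = c("gincdif", "dfincac", "smdfslv"),
                    group = "gndr",
                    group.equal = c("loadings", "thresholds"))
      summary(fit_eg, fit.measures = TRUE)
      ```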

    6. 64 64 dfincac | t1 0 2 2 40 NA 0 .p11. .p64. -1.422 -1.217 0.048

      meaning: this item works differently for males and females

    7. 9 .p11. == .p64. 12.723 1 0.000

      this is the highest; look up which parameter .p64. refers to

    8. 1 score 34.435 28 0.187

      even if we freeze some of the thresholds, we will not reach an improvement in the chi-square of our model; no scalar invariance is possible

    9. fit_eg_sc_scalar 36 121.815 39.008 16 0.001085 **

      we are unable to compare the thresholds

    10. eg_ws_mlr_cov 75.59 9 0.94 0.89 0.07 0.05 0.08 0.03 0.06

      a model with covariance included fits the data much better

    11. wc_socia ~~ 0*egual

      excludes the covariance (fixes it to zero)
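
      As a rough sketch, this is the difference between fixing the factor covariance to zero and leaving it free. The indicator sets and the data frame name are placeholders; only wc_socia, egual and the ~~ 0* line come from these notes.

      ```r
      library(lavaan)

      model_no_cov <- '
        egual    =~ gincdif + dfincac + smdfslv   # placeholder indicators
        wc_socia =~ wcs1 + wcs2 + wcs3            # placeholder indicators
        wc_socia ~~ 0*egual                       # exclude (fix to zero) the factor covariance
      '

      model_cov <- '
        egual    =~ gincdif + dfincac + smdfslv
        wc_socia =~ wcs1 + wcs2 + wcs3
        wc_socia ~~ egual                         # covariance freely estimated (lavaan default)
      '

      fit_no_cov <- cfa(model_no_cov, data = ess)
      fit_cov    <- cfa(model_cov,    data = ess)
      anova(fit_no_cov, fit_cov)   # chi-square difference: does including the covariance fit better?
      ```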

    12. 2 egual.BY.dfincac -0.61*** ( 0.06 ) -0.61*** ( 0.06 )

      the model is similar, but you can see that some SE are actually different!

    1. Number of observations 1758 1760

      we lost 2 observations

    2. sbprvpv|t1 -1.221 0.040 -30.423 0.000 -1.221 -1.221
       sbprvpv|t2  0.528 0.032  16.562 0.000  0.528  0.528
       sbprvpv|t3  1.111 0.038  29.093 0.000  1.111  1.111
       sbprvpv|t4  2.213 0.081  27.384 0.000  2.213  2.213

      tells you how the variable is distributed

    3. Thresholds:

      for each of the variables that you have

    4. Intercepts:

      not estimated, since intercepts do not make sense here (thresholds are used instead)

    5. Latent Variables:

      loadings are better now compared to the continuous case

    6. binary:

      Item response theory can be used to build a model which uses only binary data
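
      A minimal sketch, assuming hypothetical 0/1 items y1-y4 and a data frame dat: declaring binary indicators as ordered in lavaan gives a probit model equivalent to a two-parameter IRT model.

      ```r
      library(lavaan)

      model_bin <- '
        trait =~ y1 + y2 + y3 + y4   # hypothetical binary (0/1) items
      '

      fit_bin <- cfa(model_bin, data = dat,
                     ordered = c("y1", "y2", "y3", "y4"))  # items treated as categorical (WLSMV)
      summary(fit_bin, standardized = TRUE)
      ```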

    7. SRMR 0.079 0.079

      the estimates of the factor loadings will be the same as in the non-robust version: the point estimates are the same, while the standard errors (including those of the residual variances) do change

    8. 90 Percent confidence interval - lower 0.231 90 Percent confidence interval - upper 0.319

      again we see robust version

    9. 90 Percent confidence interval - lower 0.248 0.155 90 Percent confidence interval - upper 0.304 0.191

      the robust version is lower, which is good

    10. Robust Comparative Fit Index (CFI) 0.863 Robust Tucker-Lewis Index (TLI) 0.590

      the other two are the scaled versions

    11. Comparative Fit Index (CFI) 0.862 0.909 Tucker-Lewis Index (TLI) 0.586 0.726

      we improve the fit indices here

    12. 106.210

      the test statistic is lower, which is good; we are improving the model. We scale the variance-covariance matrix by a kurtosis-based correction; see how big the scaling factor is underneath

    13. Scaling correction factor 2.507

      reflects the scaling factor with which the test statistic and fit indices are corrected

    14. #

      the fit statistics and standard errors become more robust

    15. estimator = "ML

      These estimators make it possible to still run the model under non-normality. It can be seen as a scaling procedure based on kurtosis; it penalizes certain variables more than others
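
      A sketch of how the robust estimator is requested; the model object and data frame names are placeholders, only the estimator argument comes from these notes.

      ```r
      library(lavaan)

      fit_ml  <- cfa(model_ws, data = ess, estimator = "ML")   # plain maximum likelihood
      fit_mlr <- cfa(model_ws, data = ess, estimator = "MLR")  # robust SEs + scaled test statistic

      # Point estimates are identical; standard errors and the chi-square are corrected
      summary(fit_mlr, fit.measures = TRUE)
      ```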

    16. HZ p value MVN

      here there is non-normality (the Henze-Zirkler (HZ) test rejects multivariate normality)

    17. multivariate normality test

      this tests the joint distribution of all the variables included in the dataset
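
      A sketch of such a test with the MVN package; the column selection is illustrative and the exact output layout may differ across package versions.

      ```r
      library(MVN)

      # Henze-Zirkler test of multivariate normality on a set of items
      res <- mvn(ess[, c("gvslvue", "sbprvpv", "dfincac")], mvnTest = "hz")
      res   # prints the multivariate (HZ) result and per-variable normality tests
      ```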

    18. D P.value

      Here we see that none of our variables are normally distributed.

      You should run such a test for all the variables which are continuous.

    19. "P-value"

      You can actually test whether any variable follows a certain distribution by changing this parameter

    20. KS test

      The Kolmogorov-Smirnov test is a standard test to check whether a variable deviates from normality
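
      A sketch of the KS test against a normal distribution with the sample mean and standard deviation; the variable choice is only an example.

      ```r
      # Compare one variable against a normal distribution with matching mean and sd
      x <- ess$gvslvue
      ks.test(x, "pnorm", mean = mean(x, na.rm = TRUE), sd = sd(x, na.rm = TRUE))
      ```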

    21. wk

      What is this variable? People tend to agree on the welfare-state response items; only on this one (old age etc.) they don't agree

    22. #

      It is easy to just plot the variable to see whether it is normally distributed or not; underneath we see that this is not the case (see the sketch below)

      Usually with survey data we prefer a skewed distribution
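
      A sketch of that visual check; the variable choice is an example.

      ```r
      x <- ess$gvslvue
      hist(x, breaks = 10, main = "Distribution of gvslvue")   # a skewed shape is easy to spot
      qqnorm(x); qqline(x)                                     # points leave the line if non-normal
      ```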

    1. tot_edu_diff -0.037 0.022 -1.700 0.089 -0.029 -0.105

      it is borderline significant

    2. fit_mediation_mg_path_ab 46 29602 29837 142.63 0.0011676 1 0.9727

      not significantly different: the coefficients are very similar, so the two models are not that different; even though the path is allowed to differ, this is not informative

    3. Model 44 29602 29848 138.86 138.86 44 0.000000000009203 ***

      the model can be improved

    4. total_edu_g2 0.029 0.016 1.894 0.058 0.024 0.085

      this one is borderline significant, yet the coefficient is small, so we do not really want to take this into account

    5. ## Direct effect ##
       welf_supp ~ c("c_inc_1", "c_inc_2")*hinctnta
       welf_supp ~ c("c_age_1", "c_age_2")*agea
       welf_supp ~ c("c_edu_1", "c_edu_2")*eduyrs
       ## Mediator ##
       # Path A
       egual ~ c("a_inc_1", "a_inc_2")*hinctnta
       egual ~ c("a_age_1", "a_age_2")*agea
       egual ~ c("a_edu_1", "a_edu_2")*eduyrs
       # Path B
       welf_supp ~ c("b1", "b2")*egual
       ## Indirect effect (a*b) ##
       # G1
       ab_inc_g1 := a_inc_1*b1
       ab_age_g1 := a_age_1*b1
       ab_edu_g1 := a_edu_1*b1
       # G2
       ab_inc_g2 := a_inc_2*b2
       ab_age_g2 := a_age_2*b2
       ab_edu_g2 := a_edu_2*b2
       ## Total effect c + (a*b) ##
       # G1
       total_inc_g1 := c_inc_1 + (a_inc_1*b1)
       total_age_g1 := c_age_1 + (a_age_1*b1)
       total_edu_g1 := c_edu_1 + (a_edu_1*b1)
       # G2
       total_inc_g2 := c_inc_2 + (a_inc_2*b2)
       total_age_g2 := c_age_2 + (a_age_2*b2)
       total_edu_g2 := c_edu_2 + (a_edu_2*b2)

      this needs to be written by hand (the equations for the defined parameters)

    6. conduct an additional test to see if this is the case

    7. 27 27 gvcldcr ~1 0 2 2 24 NA 0 .p13. .p27. 7.431 7.286 0.049

      indicates the difference: will the model improve if you let the intercept be estimated freely?

    8. 7 .p13. == .p27. 15.885 1 0.000

      this is the biggest

    9. fit_scalar_gvcldcr 10 25353 25452 42.889 19.453 3 0.0002203 ***

      we do not reach scalar invariance

    10. welf_supp 0.014 0.066 0.207 0.836 0.012 0.012

      the mean is estimated: the females' mean is a bit higher compared to the males', yet the difference is not that big; you can then use, for instance, a t-test to see whether there is a significant difference between the groups

    11. welf_supp 0.000 0.000 0.000

      the mean is fixed to 0

    12. c(0,NA)*0

      to identify the model, you need to set the mean of one group to 0
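
      A sketch of this identification device: the latent mean is fixed to 0 in the first group and freely estimated in the second. The indicator set is borrowed from the welfare items mentioned in these notes, but the exact model is an assumption.

      ```r
      library(lavaan)

      model_means <- '
        welf_supp =~ gvslvue + gvcldcr + sbprvpv   # assumed indicator set
        # latent mean: fixed to 0 for group 1 (Male), free for group 2 (Female)
        welf_supp ~ c(0, NA)*1
      '

      fit_means <- cfa(model_means, data = ess, group = "gndr",
                       group.equal = c("loadings", "intercepts"))  # scalar invariance required
      summary(fit_means)
      ```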

    13. Scalar

      here we can compare the means across the groups

    14. 1,L

      estimated to be the same across the groups

    15. Group 2 [Female]:

      not able to do anything, not identified

    1. 70.047 0.62147 3 0.8915

      it is not necessary to allow the paths to be different, since they are similar

    2. c2

      even though the loadings are the same for the two groups, the regression paths can be different

    3. model_mediation_mg <- '

      specify the path model; recall it is the same as in the previous session

    4. S

      In OLS we would just add an interaction; here we want to test whether the direct and indirect effects are different across groups

    5. 5 Multi-group SEM

      here we look at the structural paths: is our path coefficient different across the different groups?

    6. group.partial = c(gvslvue ~~ gvslvue)

      test this parameter by freeing it across the groups (partial invariance)
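
      A sketch of a partial-invariance fit; note that group.partial takes a character vector. The model and data names are placeholders; only the freed parameter comes from these notes.

      ```r
      fit_partial <- cfa(model_ws, data = ess, group = "gndr",
                         group.equal   = c("loadings", "intercepts", "residuals"),
                         group.partial = c("gvslvue ~~ gvslvue"))  # free this residual across groups
      fitMeasures(fit_partial, c("chisq", "df", "pvalue", "cfi", "rmsea"))
      ```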

    7. parTable(fit_strict)

      gives an overview of all parameters; it tells you the meaning of the parameter labels used above

    8. X2

      this is the contribution of letting this parameter free to the overall chi-square

    9. 0.049

      this is borderline

    10. total score test:

      it tells us whether allowing something (e.g., a residual) to be free across the two groups would improve the chi-square of our model
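
      A sketch of the score test in lavaan (fit_strict refers to the strict model fitted earlier in these notes).

      ```r
      library(lavaan)

      st <- lavTestScore(fit_strict)
      st$test   # total score test: releasing all equality constraints at once
      st$uni    # univariate tests: the contribution of releasing each single constraint
      ```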

    11. e

      at least 2 indicators need to stay the same across the groups; only afterwards can we maybe free the rest

    12. ' ' 1

      the model fits very well, even though there are differences

    13. we want to test that the fit does not get too bad, to the point where we can no longer use the model

    14. *

      even though the fit indices are good, the test rejects the null hypothesis (the closer to 0, the better); we see big drops in fit (chi-square) every time we modify our model. What does it mean technically? Does the model improve if we put constraints on it? It becomes simpler; we want to end up with a good model, not a bad one

    15. Strict 7 0.99 0.99 0.04 0.01 0.06 0.77 0.03

      Very good indices, but no! There should be another test, since the fit indices alone are not powerful enough; look at the likelihood ratio test

    16. (.p2.)

      tells lavaan to fix the coefficient to be the same across the two groups

    17. "residuals"

      Variance is the same (assumption)

    18. intercepts"

      a vector of 2 (loadings and intercepts); if we reach this type of invariance, one group is not systematically expected to give a different response to an item

    19. )

      the meaning might be that the interpretation of the question is different for the two groups, or that the response style differs

    20. (

      Here we want the loadings to be the same

    21. 0.685

      this is lower compared to the males; are they statistically significantly different, or are they the same?

    22. factor

      transforming the grouping variable into a factor; the two groups' covariance matrices are treated independently

    23. Strict Invariance

      useful when you have a strong theoretical background and established scales; in certain cases you then also want the residual variances to be the same

    24. intercepts (set to equal)

      across the different groups

    25. Scalar Invariance (also called “strong” invariance)
      • when you want to compare the mean
      • research question: how does one group differ compared to the other? (analogous to a t-test)
    26. equal

      in each group

    27. Measurement Equivalence
      • the process of testing whether a latent construct is understood the same way across the groups; it can be applied to different grouping schemes. Intensive if there are a lot of groups, since you need to inspect each loading for each group (can be a lot of work)
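
      A compact sketch of the usual invariance test sequence in lavaan; model and data frame names are placeholders.

      ```r
      library(lavaan)

      # Configural: same structure in both groups, all parameters free
      fit_configural <- cfa(model_ws, data = ess, group = "gndr")

      # Metric ("weak"): equal loadings
      fit_metric <- cfa(model_ws, data = ess, group = "gndr",
                        group.equal = "loadings")

      # Scalar ("strong"): equal loadings and intercepts -> latent means become comparable
      fit_scalar <- cfa(model_ws, data = ess, group = "gndr",
                        group.equal = c("loadings", "intercepts"))

      # Strict: additionally equal residual variances
      fit_strict <- cfa(model_ws, data = ess, group = "gndr",
                        group.equal = c("loadings", "intercepts", "residuals"))

      # Likelihood ratio tests between the nested models
      lavTestLRT(fit_configural, fit_metric, fit_scalar, fit_strict)
      ```
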
    1. 0.066

      amount of variance explained in the model

    2. label

      these are the regression paths; this is a way to extract them from the output and put them into a table

    3. 0.047

      this is better compared to the previous model, since we specified more mediation paths; note that if you add mediation paths, this needs to be justified by theory

    4. welf_supp ~ b*egual

      only 1 path, since we are estimating one prediction

    5. rmsea

      how good is your prediction model? you can also use R^2

    6. fi cfi 0.98 0.94 0.91 0.90 0.88 tli t

      only look at these for the measurement model, not for the regression paths

    7. Fit Measurement plus_gender plus_age plus_income plus_education

      as more covariates are introduced the model gets worse; especially CFI and TLI decrease a lot. Why for CFI and TLI? You get a bigger variance-covariance matrix, thus generating more error, since more variables are being correlated (see extra paper note)

    8. easuremen

      good: even though there is a negative factor loading, your fit is good; thus model fit does not tell you anything about the loadings

    9. 0.404

      this is wrong, delete

    1. -0.051 total -0.019 0.015 -1.227 0.220 -0.015 -0.035

      the text underneath is related to this section

    2. -0.035

      the indirect effect, which passes through the mediator, is significant and negative; the total effect taken alone is not significant

    3. -0.263

      a 1 SD increase in egalitarianism corresponds to about a 0.26 SD decrease in welfare support

    4. egual ~ hinctnta (a) 0.057 0.010 5.853 0.000 0.083 0.196 welf_supp ~ egual (b) -0.488 0.074 -6.556 0.000 -0.263 -0.263

      this is interesting: the effect goes from positive to negative

    5. (c)

      income has no direct effect on welfare support

    6. -0.396

      a terrible loading for the latent factor: either you need to reverse-code the item or this indicator does not work, so kick it out; you should have seen this before and deleted it

    7. :=

      again, calculating a new parameter

    8. :=

      := stands for a new parameter; "ab" is just a name, it could be called anything else. You give it the coefficients, multiply them together, and the result is stored under the label specified

    9. c

      this is a label to indicate what the regression path is called; we do exactly the same for paths a and b

    10. the amount of mediati

      how much of the effect from x to y the mediation explains

    11. Mediation

      You are not required to use latent factors for such an analysis; you can also use manifest variables, and you can even fit multilevel mediation models. It is important to have theory behind such an analysis
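
      A minimal sketch of such a mediation with observed variables only, reusing the labels and the := operator discussed above. The variable roles follow these notes (hinctnta as predictor, egual as mediator, welf_supp as outcome), but here they are taken as observed scores and the data frame name is a placeholder.

      ```r
      library(lavaan)

      model_med <- '
        # direct effect (path c)
        welf_supp ~ c*hinctnta
        # mediator (paths a and b)
        egual     ~ a*hinctnta
        welf_supp ~ b*egual
        # defined parameters
        ab    := a*b        # indirect effect
        total := c + (a*b)  # total effect
      '

      fit_med <- sem(model_med, data = ess, se = "bootstrap", bootstrap = 500)
      parameterEstimates(fit_med, standardized = TRUE)
      ```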

    12. gndr 0.013 0.066 0.197 0.844 0.011 0.005 e

      yet these are not significant, so they are not reported in the paper

    13. 0.010

      a 1 SD increase in the education of our respondents corresponds on average to a 0.010 increase in the factor

    14. 0.011

      look at this column when you are working with dummy variables (not standardized)

    15. -1.999814

      dummy