---
title: "Week 14 lab: Meta-analysis workflow"
subtitle: "Hedges' g, pooled models, forest plots, and bias checks"
format:
  html:
    toc: true
    embed-resources: true
execute:
  echo: true
  warning: false
  message: false
---

## Goal

By the end of this lab you will be able to:

1. compute standardized mean differences with `metafor::escalc()`,
2. fit and compare fixed-effect and random-effects meta-analysis models with `metafor::rma()`,
3. create and interpret a forest plot for a study-level literature, and
4. use a funnel plot, `regtest()`, and `trimfill()` as publication-bias checks.

## Reporting reminder

As in the earlier labs, each exercise should leave behind two things: code that reproduces the analysis, and a short write-up in your own words. The point is not to dump every printed line into the answer. The point is to turn the model output and plots into a short statistical judgment.

## 0) Setup

Put this file next to the worksheet:

- `week14_lab_action_game_attention_meta.csv`

Load packages:

```{r}
library(tidyverse)
library(metafor)
```

This week we will work with a synthetic literature on *action video game training*. In each study, one group completes a short period of action-game training, while the comparison group either plays a non-action game or completes some other low-action control task. The outcome is a visual-attention performance score, so higher values indicate better selective or spatial attention.

## 1) Exercise 1

Start by loading the study-level dataset. Each row is one study, with treatment and control means, standard deviations, and sample sizes.

```{r}
meta_lit <- read.csv("week14_lab_action_game_attention_meta.csv")

glimpse(meta_lit)
summary(meta_lit)
```

Now put the studies on a common effect-size scale. Use `metafor::escalc()` with `measure = "SMD"` to compute the standardized mean difference and its sampling variance for each study.
Store the result as `dat_g`, then inspect a few rows containing the study label, year, effect size, and sampling variance.

The function `escalc()` is `metafor`'s standard effect-size constructor. With `measure = "SMD"`, it computes a standardized mean difference from the study means, SDs, and sample sizes. By default, the effect-size estimate is stored in `yi` and its sampling variance in `vi`, which are the names that `metafor` uses in later modeling functions.

```{r}
#| lab: student
#| include: true

dat_g <- NULL
```

Questions:

- In this context, what does a positive `yi` mean, and what would a negative `yi` mean?
- What does `vi` represent, and why does a meta-analysis need it?

*Put your answer here*

## 2) Exercise 2

Now that the studies are on a common scale, the next question is how to pool them. Fit a fixed-effect model called `res_fe` and a random-effects model called `res_re`. Print both model objects so you can inspect the standard output directly.

The function `rma()` is the main meta-analysis model fitter in `metafor`. Here `method = "FE"` fits a fixed-effect model, while `method = "REML"` fits a random-effects model and estimates the between-study variance from the data.

```{r}
#| lab: student
#| include: true

res_fe <- NULL
res_re <- NULL
```

Questions:

- Which pooled model would you report here, and why?
- Do the heterogeneity results suggest that the studies are all estimating exactly the same effect?
- What do I² and τ² (`tau^2` in the output) mean in plain language?

*Put your answer here*

## 3) Exercise 3

Once the pooled model is fitted, it helps to look at the whole literature at once. Create a forest plot for the random-effects model. Label the studies with their names and years, and label the x-axis as standardized mean difference.

The function `forest()` draws the study estimates and their confidence intervals together with the pooled estimate. The `slab` argument controls the study labels printed on the left.
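As a reference for the call signature (not the lab answer), here is what a `forest()` call looks like on a small invented literature. Every study label, effect size, and variance below is made up purely for illustration:

```{r}
# Toy illustration only: these effect sizes and variances are invented,
# not taken from the lab dataset.
library(metafor)

toy <- data.frame(
  study = c("Avila 2011", "Brand 2014", "Chen 2018"),  # hypothetical labels
  yi    = c(0.42, 0.15, 0.60),  # invented standardized mean differences
  vi    = c(0.05, 0.03, 0.08)   # invented sampling variances
)

toy_re <- rma(yi, vi, data = toy, method = "REML")

# slab supplies the labels on the left; xlab names the effect-size axis
forest(toy_re,
       slab = toy$study,
       xlab = "Standardized mean difference")
```

Your own chunk below should do the same thing with `res_re` and the study names and years from the lab data.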
```{r}
#| lab: student
#| include: true

```

Questions:

- What is the overall substantive conclusion from the forest plot?
- Identify one study that looks atypical and explain what makes it stand out.
- If you relied only on "how many studies look clearly nonzero," what information would you lose compared with the meta-analytic model?

*Put your answer here*

## 4) Exercise 4

The final step is not to prove publication bias, but to check whether the pattern of results makes small-study effects plausible. Start by making a funnel plot for `res_re`, then run an Egger-style regression test with `regtest(res_re, model = "rma")`.

The function `funnel()` makes the standard funnel plot for a meta-analysis model: more precise studies should cluster near the pooled estimate, while less precise studies spread out more widely. The function `regtest()` formalizes that visual check by testing whether effect sizes are systematically related to study precision.

```{r}
#| lab: student
#| include: true

egger <- NULL
```

Now continue with a trim-and-fill sensitivity analysis. Run `trimfill(res_re)`, inspect the adjusted model object, and then make a funnel plot for the adjusted result.

The function `trimfill()` tries to estimate how the pooled result would change if the funnel asymmetry were due to missing studies on one side. It is best treated as a sensitivity analysis: it creates a hypothetical bias-corrected summary rather than proving that those missing studies really exist.

```{r}
#| lab: student
#| include: true

tf <- NULL
```

Questions:

- Does the funnel look reasonably symmetric, or does it suggest missing studies on one side?
- What is the null hypothesis of the regression test in words?
- How much does the trim-and-fill estimate differ from the original random-effects estimate?
- Why should you treat these steps as sensitivity analysis rather than definitive proof of publication bias?
*Put your answer here*

## 5) Checklist

- Make sure the effect sizes are coded so positive values mean the same substantive direction in every study.
- Look at both the pooled estimate and the heterogeneity statistics before deciding how to summarize the literature.
- Use the forest plot to connect the pooled result back to the individual studies.
- Treat funnel plots, regression tests, and trim-and-fill as bias checks, not automatic verdicts.
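## Appendix: a worked toy example

To see the pieces of the workflow in one place, here is a compact sketch on invented data. Every number and study label below is made up for illustration; your actual answers must use the lab data file and your own `dat_g`, `res_fe`, and `res_re` objects.

```{r}
# All numbers below are invented for illustration; this is not the lab data.
library(metafor)

toy_lit <- data.frame(
  study = paste("Study", 1:6),                    # hypothetical labels
  m1i  = c(10.2, 11.0, 9.8, 10.5, 11.4, 10.1),    # treatment means
  sd1i = c(2.1, 1.9, 2.4, 2.0, 2.2, 2.3),
  n1i  = c(20, 35, 18, 50, 24, 40),
  m2i  = c(9.5, 10.1, 9.6, 9.9, 10.2, 9.7),       # control means
  sd2i = c(2.0, 2.1, 2.3, 2.1, 2.0, 2.2),
  n2i  = c(20, 35, 18, 50, 24, 40)
)

# 1) Standardized mean differences (yi) and sampling variances (vi)
dat <- escalc(measure = "SMD",
              m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i,
              data = toy_lit)

# 2) Fixed-effect vs. random-effects pooling
fe <- rma(yi, vi, data = dat, method = "FE")
re <- rma(yi, vi, data = dat, method = "REML")

# 3) Bias checks: Egger-style regression test and trim-and-fill
reg <- regtest(re, model = "rma")
tf  <- trimfill(re)

# Compare the pooled estimates side by side
c(fixed = as.numeric(coef(fe)),
  random = as.numeric(coef(re)),
  trimfill = as.numeric(coef(tf)))
```

Note the shape of the pipeline rather than the numbers: `escalc()` feeds `yi` and `vi` into `rma()`, and the fitted random-effects object is what `regtest()` and `trimfill()` operate on.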