Michael LaValley, PhD, Professor, Department of Biostatistics, BU School of Public Health
In most cases, it is better to keep continuous variables continuous, though there are some exceptions.
Using restricted cubic splines is a way to keep continuous variables continuous: they allow a more faithful representation of the relationship between a continuous predictor and the outcome.
Splines capture non-linearity between variables, something that is lost when data are binned or categorized.
If an investigator is interested in employing this statistical approach, they can talk to their assigned statistician about the appropriate way to do this.
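For readers curious about the mechanics, here is a minimal sketch of a restricted-cubic-spline basis in Python/NumPy, following Harrell's parameterization (the function name and knot values are illustrative; in practice this is handled by standard software such as the R rms package):

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's parameterization).

    Returns columns [x, s1, ..., s_{k-2}] for k knots; the fitted
    curve is constrained to be linear beyond the outermost knots.
    """
    x = np.asarray(x, dtype=float)
    t = np.sort(np.asarray(knots, dtype=float))
    k = len(t)
    pos = lambda v: np.maximum(v, 0.0) ** 3  # truncated cubic (x - t)_+^3
    cols = [x]
    for j in range(k - 2):
        term = (pos(x - t[j])
                - pos(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                + pos(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        cols.append(term)
    return np.column_stack(cols)
```

Fitting is then an ordinary regression of the outcome on these basis columns (e.g. with np.linalg.lstsq); the restriction forces the fitted curve to be linear beyond the outermost knots, which tames wild tail behavior.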
Initial Data Analysis (IDA) is intended to provide reliable information about the data before starting regression analyses to address the primary research question.
IDA is not exploratory data analysis and should not probe the associations between predictors and the outcome.
IDA should focus on (a) missing data, (b) univariate distributions, and (c) multivariate associations among the predictors.
Findings from IDA can provide refinements to the analysis plan and guide the later regression analyses.
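A minimal sketch of those three IDA checks in Python/pandas (the function name is illustrative; note the outcome column is deliberately excluded):

```python
import pandas as pd

def initial_data_analysis(df: pd.DataFrame) -> dict:
    """IDA summaries for the predictors only -- the outcome stays out."""
    return {
        # (a) fraction missing per predictor
        "missingness": df.isna().mean(),
        # (b) univariate distributions (counts, means, quartiles, ...)
        "univariate": df.describe(include="all"),
        # (c) pairwise associations among the numeric predictors
        "associations": df.corr(numeric_only=True),
    }
```

Reviewing these tables before modeling can flag, for example, a predictor with 40% missingness or a pair of near-collinear predictors, prompting a refinement of the analysis plan.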
Measurement error can also be called information bias.
Misclassification refers to measurement error in binary and categorical variables.
Misclassification can be differential vs nondifferential. Example of differential misclassification: when disease status changes how people report exposure (recall bias).
Misclassification can also be dependent vs independent (i.e., whether errors in one variable are related to errors in another).
We learn that nondifferential misclassification biases estimates towards the null.
It’s cited in Discussion (limitation) sections of papers all the time. But it’s not always true.
Overall, we expect misclassification to be nondifferential.
Even so, the actual/observed estimate from a single study may not be biased towards the null, due to chance; this is more likely with smaller sample sizes.
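The expected attenuation can be seen with a worked example using expected cell counts (all numbers are illustrative; a real study's estimate would also scatter around this expectation by chance):

```python
def observed_rr(p1, p0, n1, n0, sens, spec):
    """Expected observed risk ratio when exposure is nondifferentially
    misclassified with the given sensitivity and specificity.
    p1/p0: true risks in exposed/unexposed; n1/n0: true group sizes."""
    # expected people classified as "exposed", and their expected cases
    n_exp = sens * n1 + (1 - spec) * n0
    cases_exp = sens * n1 * p1 + (1 - spec) * n0 * p0
    # expected people classified as "unexposed", and their expected cases
    n_unexp = (1 - sens) * n1 + spec * n0
    cases_unexp = (1 - sens) * n1 * p1 + spec * n0 * p0
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

# True RR = 0.4 / 0.1 = 4; with sensitivity 0.8 and specificity 0.9 the
# expected observed RR is attenuated toward 1 (about 2.4 here).
rr = observed_rr(0.4, 0.1, 1000, 1000, sens=0.8, spec=0.9)
```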
Misclassifying some levels of a categorical exposure can lead to bias away from the null.
Collapsing nondifferentially misclassified exposure categories can lead to unpredictable bias.
Dependent nondifferential misclassification of the exposure or outcome can lead to unpredictable bias, even with small amounts of misclassification. Example: when the same instrument is used to measure both exposure & outcome & people might tend to respond the same way.
Nondifferential outcome misclassification: with perfect specificity but low sensitivity (as with a fracture outcome), expect the relative risk to be unbiased. The risk difference, however, is biased by a factor equal to the sensitivity (observed RD = sensitivity × true RD).
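This special case is easy to verify algebraically; a small sketch with illustrative numbers (with perfect specificity there are no false positives, so both observed risks are simply scaled by the sensitivity):

```python
def observed_measures(p1, p0, sens, spec=1.0):
    """Expected observed RR and RD when the outcome is nondifferentially
    misclassified. With perfect specificity, observed risk = sens * true risk."""
    q1 = sens * p1 + (1 - spec) * (1 - p1)  # observed risk, exposed
    q0 = sens * p0 + (1 - spec) * (1 - p0)  # observed risk, unexposed
    return q1 / q0, q1 - q0  # observed RR, observed RD

# true RR = 0.20 / 0.10 = 2.0, true RD = 0.10
rr, rd = observed_measures(0.20, 0.10, sens=0.6)
# with perfect specificity: RR stays 2.0, RD shrinks to 0.6 * 0.10 = 0.06
```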
TARMOS (Treatment And Reporting of Missing data in Observational Studies) is a structured and practical framework designed to help researchers transparently analyze incomplete observational data, ensuring a systematic approach that aligns with the nature of the data and research objectives.
Missing data is prevalent in medical research, especially in observational studies, and can arise from participant nonresponse, measurement error, or study design limitations.
Ignoring or improperly handling missing data can bias results, reduce statistical power, and lead to misleading conclusions that may not be representative of your study population.
Understanding the three missing data mechanisms—Missing Completely at Random (MCAR), Missing at Random (MAR), and Missing Not at Random (MNAR)—is crucial, as they determine the appropriate handling method, such as complete case analysis, single imputation, or multiple imputation.
Causal diagrams can help illustrate the relationship between missingness and observed variables, while incorporating auxiliary variables (predictors of missing values not included in the main model) in multiple imputation can reduce bias and improve efficiency.
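A stdlib-only simulation sketch of why the mechanism matters (the data-generating rule and all numbers are invented for illustration): under MAR, complete-case analysis of a mean is biased when missingness depends on a covariate related to the outcome, while imputing from that covariate — the building block of multiple imputation — removes most of the bias.

```python
import random

random.seed(42)
n = 20000
# hypothetical data: outcome y depends on a fully observed covariate x
xs = [random.uniform(0, 1) for _ in range(n)]
ys = [x + random.gauss(0, 0.1) for x in xs]
# MAR mechanism: y is far more likely to be missing when x is large
missing = [x > 0.5 and random.random() < 0.8 for x in xs]

true_mean = sum(ys) / n  # ~0.5
obs = [(x, y) for x, y, m in zip(xs, ys, missing) if not m]
cc_mean = sum(y for _, y in obs) / len(obs)  # biased low: high-x records drop out

# regression imputation using x (what multiple imputation builds on)
mx = sum(x for x, _ in obs) / len(obs)
slope = (sum((x - mx) * (y - cc_mean) for x, y in obs)
         / sum((x - mx) ** 2 for x, _ in obs))
intercept = cc_mean - slope * mx
filled = [y if not m else intercept + slope * x
          for x, y, m in zip(xs, ys, missing)]
imp_mean = sum(filled) / n  # close to the true mean again
```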
4/2/2025
Introduction to Adaptive Platform Trials (APTs) –
Paper 1,
Paper 2
One Protocol, Multiple Arms: APTs allow multiple treatments to be tested simultaneously under a single, flexible master protocol in order to increase efficiency and coordination.
Built to Adapt: Treatments can be added or dropped mid-trial based on prespecified interim analyses, allowing real-time responsiveness to emerging data.
Shared Control: APTs often use a shared control group across arms, reducing sample size and participant exposure to placebo or standard care.
Power in Simulation: Extensive pre-trial simulations are used to validate design performance and ensure statistical rigor, especially for controlling Type I error.
Proven in Crisis: APTs were successfully deployed in COVID-19 (e.g., TOGETHER, RECOVERY), demonstrating their value in fast-moving clinical landscapes.
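A small Monte Carlo sketch of one thing those pre-trial simulations guard against (numbers and design are illustrative, not from any real APT): testing at an unadjusted interim look inflates the Type I error above the nominal 0.05, which is why stopping boundaries are pre-specified and validated by simulation.

```python
import random

random.seed(1)
Z_CRIT = 1.96           # two-sided nominal alpha = 0.05
N_INTERIM, N_FINAL = 50, 100
REPS = 10000

naive_hits = final_only_hits = 0
for _ in range(REPS):
    draws = [random.gauss(0.0, 1.0) for _ in range(N_FINAL)]  # null is true
    z_interim = sum(draws[:N_INTERIM]) / N_INTERIM ** 0.5
    z_final = sum(draws) / N_FINAL ** 0.5
    if abs(z_final) > Z_CRIT:
        final_only_hits += 1
    if abs(z_interim) > Z_CRIT or abs(z_final) > Z_CRIT:
        naive_hits += 1  # "reject at either look", no adjustment

alpha_single = final_only_hits / REPS  # close to the nominal 0.05
alpha_naive = naive_hits / REPS        # inflated, roughly 0.08
```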