
Technical Quality of Statewide Assessments

Technical Reports

The annual technical report for each assessment provides information about the technical qualities of the assessment administration, including test design and development, scoring and reporting of results, and analyses conducted to provide evidence of the assessment’s reliability and the validity of inferences made about assessment results.

To request a copy of a technical report for a DC statewide assessment, please contact [email protected].

Technical Advisory Committee

The District of Columbia Technical Advisory Committee (TAC) provides technical consultation and guidance on issues related to the District’s statewide assessment system and its accountability system. The TAC is composed of leading national experts in psychometrics, test design, accountability, and security. TAC members advise OSSE on the validity, reliability, accessibility, and security of its statewide Next Generation Assessment System; the interpretation and use of assessment results; and its research agenda to support the technical quality of statewide assessments.

The current members of the DC TAC are:

Dr. Gregory Cizek

Dr. Cizek is Guy B. Phillips Distinguished Professor of Educational Measurement and Evaluation at the University of North Carolina-Chapel Hill. His scholarly interests include validity, standard setting, test security, and testing policy. He currently serves as a member of the National Assessment Governing Board, which oversees the National Assessment of Educational Progress (NAEP). He has held leadership positions in the American Educational Research Association (AERA) and is past President of the National Council on Measurement in Education (NCME). Dr. Cizek has worked on test development for a statewide testing program and for national licensure and certification programs, and he served as an elected member of a local board of education. He began his career as an elementary school teacher.

He is editor of the Handbook of Educational Policy (1999) and Setting Performance Standards (2001, 2012); co-editor of the Handbook of Formative Assessment (2010) and Handbook of Formative Assessment in the Disciplines (2019); and author of Filling in the Blanks (1999), Cheating on Tests: How to Do It, Detect It, and Prevent It (1999), Detecting and Preventing Classroom Cheating (2003), Addressing Test Anxiety in a High-Stakes Environment (with S. Burg, 2005), Standard Setting: A Practitioner’s Guide (with M. Bunch, 2007), and Validity: An Integrated Framework for Test Score Meaning and Use (2020).

Dr. Brian Gong

Dr. Gong is a Senior Associate at the Center for Assessment, a 501(c)(3) nonprofit. For over three decades he has provided technical assistance to states and other organizations to create more valid, useful, and individualized assessment and accountability systems, melding innovation with federal and state policy requirements. His professional contributions include regularly presenting at national conferences, consulting with the U.S. Department of Education (e.g., accountability system reviews and growth models), and serving on the committee responsible for the 2014 revision of the Standards for Educational and Psychological Testing. After receiving a Ph.D. from Stanford University, he worked at Educational Testing Service and the Kentucky Department of Education.

Dr. Kyndra V. Middleton

Dr. Middleton is a Professor of Educational Psychology and Director of Graduate Studies for the School of Education at Howard University, where she teaches statistics and measurement courses, serves as the primary methodologist for the school, and sits on numerous university-wide committees. She is also a Board member of the National Council on Measurement in Education and serves on the Executive Council of the National Alliance for Doctoral Studies in the Mathematical Sciences. Her research focuses on ensuring validity, equity, and fairness for all students, as well as on increasing the number of underrepresented minorities in the mathematical sciences.

Dr. James W. Pellegrino

Dr. Pellegrino is Distinguished Professor of Psychology and Education and a founding Co-director of the Learning Sciences Research Institute at the University of Illinois Chicago. His R&D interests focus on children’s and adults’ thinking and learning and the implications for assessment and instructional practice. He has chaired several National Academy of Sciences study committees, including those on the Foundations of Assessment, Learning Research and Educational Practice, Defining Deeper Learning and 21st Century Skills, and Developing Assessments of Science Proficiency in K-12. He is a member of the National Academy of Education and the American Academy of Arts and Sciences.

Dr. Marianne Perie

Dr. Perie is the director of the Center for Assessment and Accountability Research and Design at the University of Kansas. She has expertise in setting performance standards and has provided technical advice to several states and consortia. She has also taught courses and written extensively on standard setting and has considerable expertise in validity evaluation. Dr. Perie has been working to develop strong validity arguments for alternate assessments and has provided several technical assistance workshops to states through the U.S. Department of Education.

Peer Review

Statewide assessments in the District of Columbia are created with input from many different stakeholders, including current and former educators, local policymakers, experts in educational standards and assessment, and assessment vendor partners. The combined expertise of these groups of stakeholders has resulted in a statewide system of assessments that is closely aligned to what students in DC are expected to know and do, and that yields accurate information about students’ progress and achievement relative to DC’s educational standards.

In addition to reviews conducted by OSSE and local educators, each statewide assessment must undergo review by the U.S. Department of Education to ensure that a state’s assessment system meets a set of established criteria. The assessments are reviewed by a panel of experts in the field of educational standards and assessments. Based on how well they meet the criteria, the assessments are assigned one of four possible designations:

  • Fully meets requirements
  • Substantially meets requirements
  • Partially meets requirements
  • Does not meet requirements

DC’s PARCC and MSAA assessments both received designations of “fully meets requirements.” The DC Science, DLM, and ACCESS assessments will be submitted for peer review consideration in late 2021. For more information about the peer review process, visit the U.S. Department of Education’s website.