ARTICLE

Does the Room Matter? Active Learning in Traditional and Enhanced Lecture Spaces
Jon R. Stoltzfus†* and Julie Libarkin‡
†Biochemistry and Molecular Biology and ‡Geocognition Research Lab, Michigan State University, East Lansing, MI 48824-1319

Michèle Shuster, Monitoring Editor
Submitted March 15, 2016; Revised August 16, 2016; Accepted August 17, 2016
CBE Life Sci Educ December 1, 2016 15:ar68; DOI: 10.1187/cbe.16-03-0126
*Address correspondence to: Jon R. Stoltzfus (stoltzfu@msu.edu).
© 2016 J. R. Stoltzfus and J. Libarkin. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0). "ASCB®" and "The American Society for Cell Biology®" are registered trademarks of The American Society for Cell Biology.

ABSTRACT
SCALE-UP–type classrooms, originating with the Student-Centered Active Learning Environment with Upside-down Pedagogies project, are designed to facilitate active learning by maximizing opportunities for interactions between students and embedding technology in the classroom. Positive impacts when active learning replaces lecture are well documented, both in traditional lecture halls and SCALE-UP–type classrooms. However, few studies have carefully analyzed student outcomes when comparable active learning–based instruction takes place in a traditional lecture hall and a SCALE-UP–type classroom. Using a quasi-experimental design, we compared student perceptions and performance between sections of a nonmajors biology course, one taught in a traditional lecture hall and one taught in a SCALE-UP–type classroom. Instruction in both sections followed a flipped model that relied heavily on cooperative learning and was as identical as possible given the infrastructure differences between classrooms. Results showed that students in both sections thought that SCALE-UP infrastructure would enhance performance. However, measures of actual student performance showed no difference between the two sections. We conclude that, while SCALE-UP–type classrooms may facilitate implementation of active learning, it is the active learning and not the SCALE-UP infrastructure that enhances student performance. As a consequence, we suggest that institutions can modify existing classrooms to enhance student engagement without incorporating expensive technology.

INTRODUCTION
Incorporation of more active learning in instruction has become a major goal of efforts to improve science, technology, engineering, and mathematics (STEM) education at the undergraduate level. Active learning is based on constructivist theory—the idea that students must create their own knowledge in order for learning to persist (Dori and Belcher, 2005). One core feature of active learning in the classroom is a decrease in lecturing during which students passively listen and an increase in outcome-related activities in which students actively develop their own understanding (Andrews et al., 2011). The use of writing-to-learn (Reynolds et al., 2012), drawing-to-learn (Quillin and Thomas, 2015), and talking-to-learn (Tanner, 2009) can all facilitate students' construction of their own knowledge and can be used to promote active learning. When implemented in the classroom to replace lecture, active learning typically involves both student–student and student–instructor interactions (Andrews et al., 2011). Because of these student–student interactions, active learning is frequently linked to peer instruction and cooperative learning. Peer instruction frequently puts students together in informal ways that promote discussion of questions during class (Crouch and Mazur, 2001). Cooperative learning typically puts students together in more formal situations in which they must work together to promote each other's success; cooperative learning also usually involves some form of peer instruction. Successful implementation of cooperative learning requires careful instructional design to ensure positive interdependence among group members, including promoting one another's success, being held accountable both as individuals and as a team, using appropriate interpersonal skills, and spending time evaluating group function (Johnson et al., 1991).


The impact of well-designed active learning is so well documented that a recent meta-analysis of studies comparing carefully designed active learning with traditional lecture concluded that the positive impacts of active learning over lecture are so well established that future research should focus on the relative efficacy of different active-learning approaches rather than comparing active learning with lecture (Freeman et al., 2014).

Many aspects of instructional design will influence the impact of active learning on student outcomes. Instructional approaches such as peer instruction, cooperative learning, and flipped instruction are common methods used to facilitate active learning (Johnson et al., 1991; Crouch and Mazur, 2001; Smith et al., 2009; Strayer, 2012). All three techniques depend on students working together within the classroom, and these techniques are often used in tandem; in addition, the application of flipped instruction, wherein students learn concepts and vocabulary before attending class, is typically thought to free up class time for engagement in active learning. Flipped instruction has been shown to improve student performance, attendance, satisfaction, metacognition, and cooperative learning strategies (Stockwell et al., 2015; van Vliet et al., 2015; Hibbard et al., 2016).

Likewise, classroom space and infrastructure design impact implementation of active learning in a classroom. A number of universities have designed classrooms intended to facilitate the use of active-learning techniques. In these spaces, students are seated around tables rather than in traditional rows and are provided with technology, such as computers, large monitors, and/or large whiteboards, to allow access to digital instructional material and to facilitate collaboration within and between groups (Dori and Belcher, 2005; Beichner et al., 2007; Whiteside et al., 2010). For the remainder of this paper, we adopt the terminology of "SCALE-UP–type" to describe these flexible instructional spaces, patterned after perhaps the most familiar project (Student-Centered Active Learning Environment with Upside-down Pedagogies) to design and implement these spaces within university settings (Beichner et al., 2007). Several studies suggest that SCALE-UP–type classrooms can improve a wide range of desirable student outcomes (Dori and Belcher, 2005; Dori et al., 2007; Beichner et al., 2007; Gaffney et al., 2008; Brooks, 2011; Cotner et al., 2013).

Not all studies have shown positive outcomes from active learning, flipped instruction, or SCALE-UP–type spaces. A large study of introductory biology courses at universities across the United States found no correlation between reported active-learning levels in the classroom and student learning gains related to key evolutionary concepts (Andrews et al., 2011). The authors of this study suggest that the ability to create well-designed instruction produces positive student outcomes; such positive outcomes are unlikely in the presence of any instruction, including active learning, that lacks high-quality instructional design. Similarly, a study of the impacts of flipped instruction found no differences in student outcomes between students from a section taught using flipped instruction and students from a section incorporating carefully designed active learning and postclass homework (Jensen et al., 2015). The authors of this study suggest that carefully designed instruction using active learning can be effective whether implemented in or out of the classroom. Finally, two studies of SCALE-UP–type classrooms suggest mixed results (Brooks and Solheim, 2014; Straumsheim, 2014). Taken together, these studies suggest that carefully designed instruction is a necessary prerequisite to learning, regardless of the amount of active learning implemented or the room type used.

The mixed results from prior studies suggest a need to further dissect how interactions between active learning, flipped instruction, and instructional spaces facilitate increased student learning gains and performance on course assessments. The goal of the current research is to determine how students' perceptions and performance are impacted by equivalent active learning–based instruction in SCALE-UP–type and traditional lecture-type classroom spaces. This research has many similarities to previous studies (Brooks, 2011; Cotner et al., 2013) but includes a measure of pre- and postcourse content knowledge using a validated assessment instrument and incorporates instruction specifically designed around a flipped classroom model.

MATERIALS AND METHODS

Ethics Statement
The institutional review board of our university's Human Research Protection Program granted permission for this research, and all participants were given the option to opt out of the research. One student who chose to opt out of the study was not included in the study description or analysis.

Subjects
This study was carried out at a large, public, doctorate-granting university in the midwestern United States. Students in the study (n = 110) were predominantly non–science majors enrolled in an integrative studies biology course designed to fulfill the university's general education requirements. Students self-selected sections of the course without any a priori knowledge of the research study. Slightly more students were enrolled in the section using a traditional classroom space (Traditional) than in the section using a SCALE-UP–type space (SCALE-UP). Enrollment in both courses was capped at 72 students. Enrollment on the first day of class was 60 students in the SCALE-UP–type classroom and 68 students in the Traditional classroom. Seven to 11 students dropped the course in each section, leaving final enrollments of 49 students in the SCALE-UP–type classroom and 61 students in the Traditional classroom. Students who completed the content knowledge survey both at the start of the course and the end of the course were included in the analysis of performance (SCALE-UP: n = 34; Traditional: n = 37). Students who completed the classroom technology and infrastructure survey at the end of the course were included in the analysis of student perceptions (SCALE-UP: n = 33; Traditional: n = 42). Average ACT scores, grades, and other demographic variables for each section and the subsets of students included in these analyses are shown in Table 1.

TABLE 1. Average ACT scores, average incoming GPA, average course grade, and other demographic data for all students in the Traditional section and SCALE-UP section, the students in each section who completed both the pre- and postcourse content knowledge survey, and the students in each section who completed the technology and infrastructure survey

                                            Traditional section                          SCALE-UP section
Variable                                    All students  Pre–post survey  Tech survey   All students  Pre–post survey  Tech survey
ACT score (% of sample with score)(a)       25.25 (87%)   25.25 (86%)      25.37 (83%)   25.08 (80%)   25.33 (79%)      24.92 (76%)
Average incoming GPA                        2.92          2.94             3.07          2.75          2.78             2.73
Average course grade                        3.05          3.28             3.33          3.36          3.51             3.53
Female                                      69%           68%              81%           51%           53%              52%
Male                                        31%           32%              19%           49%           47%              48%
White (non-Hispanic)                        64%           70%              62%           71%           76%              73%
Asian (non-Hispanic)                        5%            5%               7%            6%            6%               6%
Black or African American (non-Hispanic)   11%           10%              8%            2%            3%               3%
International                               16%           12%              15%           7%            8%               7%
Hispanic ethnicity                          7%            5%               5%            0%            0%               0%
Two or more races (non-Hispanic)            3%            0%               5%            0%            0%               0%
Hawaiian/Pacific islander (non-Hispanic)    2%            3%               2%            0%            0%               0%
Ethnicity not reported                      2%            0%               2%            4%            3%               3%
Freshman                                    62%           68%              67%           57%           53%              55%
Sophomore                                   28%           22%              24%           29%           29%              27%
Junior                                      5%            5%               5%            8%            9%               9%
Senior                                      5%            5%               5%            4%            6%               6%
Lifelong education                          0%            0%               0%            2%            3%               3%

(a) ACT scores were not available for all students enrolled in the course.

Study Design
This study used a comparative quasi-experimental design to test the impact of instructional space design and technology on student performance in two different sections of the same course. The two sections of a nonmajors biology course were taught in parallel by the same instructor. The SCALE-UP section met from 12:40 to 2:00 pm on Tuesdays and Thursdays in a room
containing circular tables and movable chairs that allowed students to conveniently work in groups of three or four students (Figure 1). Each table had power outlets and flat-screen monitors on which the instructor could display digital content or to which the group could connect laptop computers or tablets and view the digital content of their choice. The room had a seating capacity of 72 students. The Traditional section met from 2:40 to 4:00 pm on Tuesdays and Thursdays in a traditional lecture hall with fixed desks in a tiered arrangement with no power outlets or flat-screen monitors available to students. The room had a seating capacity of 150 students (Figure 1). Finally, large screens allowing the instructor to display digital media to all students were available in both classrooms (Figure 1). The SCALE-UP–type classroom had four large screens located at the corners of the room, and the Traditional classroom had one large screen at the front of the lecture hall.

Both classrooms shared some common technology. Wireless Internet was available in both classrooms, and students were required to bring a mobile device with which they could connect to the Internet. These mobile devices included laptop computers, tablets, and cell phones. Each student used his or her device to respond to in-class questions related to content acquisition from the preclass reading and homework or as scaffolding for the group assignments. Each group of students was also provided with a tablet and stylus that allowed them to make freehand drawings as part of modeling activities. Students without wireless devices could also use these tablets when Internet connectivity was required.

Both sections were taught by the same instructor using an identical flipped approach. Lesson plans, preclass assignments, homework, formative assessments, and summative assessments were identical for both sections. The goal was for student concept acquisition to take place before class using lecture videos, readings, and homework. During class, students worked in formal groups to apply concepts to solve problems or build new understanding.

Formal groups were assigned by the instructor during the second week of class based on responses to a survey students completed during the first week of class. The target group size was three students, as this allowed students in the Traditional classroom to easily sit together and share resources. Group size varied from two to five students as students dropped the course and some groups were combined. The amount of time students planned to spend working on the class was used to put students with similar work ethics together. Self-reported grade point average (GPA) allowed groups to be mixed across potential ability level.

A typical class began with students using mobile devices, either personal devices or the tablets provided by the instructor, to answer several questions related to the readings and homework. Question types, including multiple choice, multiple select, circling regions on a diagram, numeric, and short answer, were answered using Pearson's Learning Catalytics platform. During this time, students were encouraged to work collaboratively and to ask the instructor questions about anything that was unclear regarding the preclass material. Following each question, the instructor would lead a whole-class discussion about the question. This set the stage for the group work, during which students either developed a model, analyzed an article from the popular press, or developed a scientific argument.

FIGURE 1. Students in the SCALE-UP classroom section sat in movable chairs at round tables with monitors on the tables (A), while students in the Traditional section sat in fixed desks in tiered rows (B). SCALE-UP room dimensions are 41 × 56 feet, with an 8 × 14 foot alcove for the instructor podium. Traditional room dimensions are 34 × 57 feet.

During this time, the instructor and an undergraduate learning assistant answered questions and interacted with groups, asking students about their progress and checking student understanding of key concepts. Following the group work, one or two groups would present their models, analyses, or arguments to the entire class for a critique. The class was then given time to critique the presentation, and several groups were selected to share their critiques with the whole class. All groups were then allowed to revise their models, analyses, or arguments based on the class critique and submit their final products to the learning-management system for evaluation and feedback. These daily group assignments were evaluated for completeness and quality using a predefined rubric. The majority of the points were earned for following instructions, with the remaining points based on the quality of the work.

For example, students were required to complete online readings and homework about gene expression on their own. In class, groups of students 1) answered Learning Catalytics questions about preclass material; 2) developed models explaining why an undifferentiated stem cell can become either an eye cell or a heart cell; 3) evaluated models developed by other groups; and 4) revised their own models. Before the next class, individual students then read an article from the popular press related to stem cell differentiation and use of stem cells in regenerative medicine. During class, groups of students 1) answered Learning Catalytics questions about the article; 2) used their models to analyze the article and make scientific arguments supporting or refuting claims made in the article; 3) evaluated arguments made by other groups; 4) revised their own arguments; and 5) submitted their groups' arguments to the learning-management system for evaluation and feedback.

Instruction was designed to maximize opportunities to use technology for sharing ideas, providing feedback, and fostering group interaction. Students in both sections collaborated in groups to answer questions and then used mobile devices to send in answers and receive feedback. Each group in both sections was provided with a Microsoft Surface Pro 2 tablet and stylus that allowed groups to create and submit freehand drawings as part of modeling assignments. In the SCALE-UP–type classroom, students could connect this tablet to the group monitor so that all group members could easily see the screen; in the Traditional classroom students sat close together to view the tablet screen. In the SCALE-UP–type classroom, group presentations could be displayed on each group's monitor and on the large classroom screens, while in the Traditional classroom, presentations were displayed on the large projection screen at the front of the room. Overall, implementation of instructional design was as similar as possible in the two sections, considering the differences in infrastructure and technology in the two classrooms.

Because classroom assessments were used to compare sections and the instructor knew of the study, the potential for bias during the grading process exists. To reduce this possibility, exams were assigned a random code and deidentified, and exams from the two sections were randomly mixed before being graded. Assessments other than exams could not be deidentified, but bias was reduced by alternating between the different sections during the grading process.
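The deidentification and mixing procedure described above is simple to automate. The sketch below is an illustrative Python version only; the student identifiers, file name, and seed are hypothetical and are not taken from the study.

```python
import random
import csv

def deidentify_exams(student_ids, seed=42):
    """Assign each exam a random code, keep the code-to-student key separate,
    and return a shuffled grading order so exams from both sections are mixed."""
    rng = random.Random(seed)
    codes = rng.sample(range(1000, 9999), len(student_ids))
    key = dict(zip(codes, student_ids))   # code -> student, stored away from graders
    grading_order = list(key.keys())
    rng.shuffle(grading_order)            # randomly mix exams before grading
    return key, grading_order

# Hypothetical student IDs from the two sections combined.
students = ["T01", "T02", "T03", "S01", "S02", "S03"]
key, order = deidentify_exams(students)

# Save the key so scores can be re-linked to students after grading.
with open("exam_key.csv", "w", newline="") as f:
    csv.writer(f).writerows(key.items())
print(order)
```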
Measures of Student Performance
Multiple measures of student outcomes were developed, including validation steps as needed, with analyses carried out both within and between sections. All statistical analysis was carried out using SPSS, version 21.

Classroom Effort. Measures of individual effort and group effort were collected to determine similarity of effort across the two sections and the impact of effort on postcourse content knowledge. Individual assignments included online homework, a short paper, and exam grades. In addition, student responses sent in through Learning Catalytics with their personal mobile devices were used to determine individual participation. Individual effort was calculated by multiplying the number of days a student interacted using Learning Catalytics by the student's total scores on individual assignments. Group assignments included daily group assignments and a final group presentation.


A portion of the daily group assignment scores was earned simply by turning in the group assignment. Group effort was calculated by combining group scores on all group assignments. Thus, both individual effort and group effort included a mixture of participation and performance metrics and provided insight beyond purely performance-based measures into how much effort individuals and groups put into the course. Group effort and individual effort for individuals from each section were compared using independent-samples t tests.
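The study's statistics were run in SPSS; purely as an illustration, the sketch below re-creates the effort measures and the independent-samples t test in Python. The function names and the small synthetic arrays are assumptions for demonstration, not the study's data.

```python
import numpy as np
from scipy import stats

def individual_effort(days_active, individual_scores):
    """Individual effort: days a student responded through Learning Catalytics
    multiplied by the student's total score on individual assignments."""
    return days_active * np.sum(individual_scores)

def group_effort(group_assignment_scores):
    """Group effort: combined score across all group assignments."""
    return np.sum(group_assignment_scores)

# Synthetic effort values for the two sections (illustrative only).
traditional = np.array([812.4, 795.1, 840.2, 778.9, 805.6, 823.3])
scale_up = np.array([845.7, 830.2, 851.6, 837.4, 819.9, 842.0])

# Independent-samples t test comparing effort across sections.
t_stat, p_value = stats.ttest_ind(traditional, scale_up)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```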
Pre–Post Content Knowledge. This course focused primarily on biotechnology. Eighteen multiple-choice items covering relevant topics were chosen from several published biology concept inventories (Klymkowsky et al., 2003; Bowling et al., 2008; Smith et al., 2008; Shi et al., 2010). These items were modified to increase validity through alignment with item construction standards (Haladyna and Downing, 1989; Haladyna et al., 2002; Frey et al., 2005) and expected student knowledge based on course content. A 19th quality-control item asking students to select a specific answer was also included to identify students who were not carefully reading and answering the questions. This 19-item survey was administered online via the course management system before the second day of class and before the last week of class in order to collect data on students' pre- and postcourse knowledge of course content. Students were instructed to "answer each question based on what you know without using any additional resources (Google, a textbook, your friends, etc.)" and to "make an honest effort to carefully read and answer all questions" and were told that they would earn the extra credit "regardless how many correct answers you select as long as you make an honest effort." Students earned extra credit worth 0.33% of their overall course scores for completion of each survey regardless of their scores on the assessment.
To ensure that a single construct was being measured by this set of items, we used factor analysis to establish unidimensionality. Unidimensionality is necessary for establishing that the set of items together measure a single, meaningful construct. Without unidimensionality, the score on the content knowledge survey would have little meaning. We investigated unidimensionality of the 18 conceptual items through exploratory factor analysis of posttest data, with iterative removal of items as the number of constructs measured by the test was evaluated. Despite emerging from established instruments, the set of items did not align with the instruments from which they were gathered and overall exhibited strong nonunidimensional behavior. Ultimately, a set of eight items was identified that adequately measured a single construct (Table 2). Given that unidimensionality was not established in any of the four studies from which items were sourced, the removal of 10 items from the set was not surprising. For this set of eight items, the Kaiser-Meyer-Olkin measure of sampling adequacy was 0.63, above the 0.6 value recommended for factor analysis, and Bartlett's test of sphericity was significant (χ2(28) = 61.3, p < 0.001). The scree plot for this set of items suggested one dominant factor, and only one eigenvalue was meaningfully above one. Finally, a Cronbach's alpha of 0.61 was calculated for this scale; an alpha above 0.6 is considered adequate for small samples (Hair et al., 2006). Taken together, this set of eight items is considered to measure a unidimensional construct related to molecular biology relevant to biotechnology. The total number correct out of these eight items was used to generate pre- and postcourse content knowledge scores for each student in both course sections. The complete content knowledge survey is available in the Supplemental Material.

TABLE 2. Factor loadings for eight items that factored together to produce a single scale and that were used as the bases for pre- and postcourse content knowledge scores

Question    Loading    Communality
2           0.643      0.413
4           0.491      0.241
5           0.478      0.228
6           0.401      0.161
8           0.461      0.212
12          0.502      0.252
14          0.584      0.341
18          0.551      0.304
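The unidimensionality checks described above (scree-plot eigenvalues and Cronbach's alpha) were carried out in SPSS; a minimal Python sketch of the same style of check is shown below. The student-by-item 0/1 matrix is randomly generated here purely for illustration and does not reproduce the study's data.

```python
import numpy as np

def scree_eigenvalues(item_matrix):
    """Eigenvalues of the inter-item correlation matrix; a single dominant
    eigenvalue is consistent with a one-factor (unidimensional) scale."""
    corr = np.corrcoef(item_matrix, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

def cronbach_alpha(item_matrix):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = item_matrix.shape[1]
    item_vars = item_matrix.var(axis=0, ddof=1)
    total_var = item_matrix.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 0/1 item responses: 40 students x 8 items (not the study's data).
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(40, 8)).astype(float)

print("eigenvalues:", np.round(scree_eigenvalues(responses), 2))
print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))
```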

Precourse content knowledge scores and postcourse content knowledge scores from each section were compared using mixed-design analysis of variance (ANOVA) for a between–within subjects analysis. The influence of precourse knowledge scores, gender, class level, group effort, individual effort, and section on postcourse knowledge scores was evaluated through linear regression. These covariates were used because of demographic differences across individuals and sections.
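The between–within comparison above was run in SPSS; as a hedged illustration, the sketch below shows an equivalent mixed-design ANOVA using the pingouin package on long-format data. The column names and scores are invented for demonstration.

```python
import pandas as pd
import pingouin as pg

# Illustrative long-format data (not the study's): one pre and one post
# content knowledge score per student, with section as the between factor.
data = pd.DataFrame({
    "student": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "section": ["Traditional"] * 6 + ["SCALE-UP"] * 6,
    "time":    ["pre", "post"] * 6,
    "score":   [3, 5, 2, 4, 2, 5, 3, 4, 1, 3, 2, 4],
})

# Mixed-design ANOVA: within factor = time (pre/post), between factor = section.
aov = pg.mixed_anova(data=data, dv="score", within="time",
                     subject="student", between="section")
print(aov[["Source", "F", "p-unc", "np2"]])
```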

Measures of Student Perceptions
Students' perceptions of how classroom technology and infrastructure influenced their experiences in the course were obtained via a survey administered at the end of the semester just before submission of grades. The survey was administered online through the university's learning-management system. The survey consisted of 13 multiple-choice questions and four open-ended questions. The questions focused on the usefulness of specific aspects of the classroom design for students, including how students thought their performance in the course would have changed if that aspect of the course had changed. Students earned 0.33% extra credit on their overall course scores for completing the survey.

Five multiple-choice questions were identical between the sections and asked students about some aspect of the classroom, technology, or instruction that was identical between the sections. Seven questions asked students about unique aspects of their specific classroom setting, such as the value of group monitors in the SCALE-UP–type classroom. In two questions, students were asked how they thought their performance in the course might have changed if their classroom had contained features from the other classroom type. One multiple-choice question was a quality-control question designed to check for students who were clicking through the survey without reading the questions. Finally, four open-ended questions allowed students to explain aspects of the classroom and technology that were helpful or that could be improved. The responses to open-ended questions were analyzed by counting the number of instances in which students mentioned a particular aspect of the course. The complete surveys are available in the Supplemental Material.

RESULTS

Measures of Student Performance
Classroom Effort. Independent-samples t test comparisons for individual effort and group effort variables were used to compare student effort across the Traditional section and SCALE-UP section. Results indicate no difference for either individual or group effort between students in the SCALE-UP–type classroom and students in the Traditional classroom (Table 3). These results suggest students' efforts across sections were comparable and students in both sections performed equally well on classroom assessments, including exams, homework, in-class activities, and group projects.

TABLE 3. Variable means across sections

Variable                       Section       n    Mean      SD        SEM
Precourse content knowledge    Traditional   37   2.757     1.422     0.234
                               SCALE-UP      34   2.147     1.676     0.302
Postcourse content knowledge   Traditional   37   4.595     1.832     0.301
                               SCALE-UP      34   3.706     2.048     0.344
Group effort                   Traditional   61   39.508    7.066     0.905
                               SCALE-UP      49   41.949    2.900     0.414
Gender                         Traditional   61   1.690     0.467     0.060
                               SCALE-UP      49   1.510     0.505     0.072
Class level                    Traditional   61   1.510     0.766     0.098
                               SCALE-UP      49   1.670     0.944     0.135
Individual effort              Traditional   61   802.062   236.549   30.287
                               SCALE-UP      49   841.360   223.219   31.890

Pre–Post Content Knowledge. Only students who completed both the pre- and postinstruction content knowledge survey were included in the analysis. Response rates between classrooms yielded similar total numbers of students, with n = 34 (69%) of SCALE-UP students and n = 37 (61%) of Traditional students completing both tests. No obvious differences in average ACT scores, grades, or other demographic variables were observed between the subset of students who completed both the pre- and posttest and the sections as a whole (Table 1).

An independent-samples t test comparison of precourse content knowledge scores indicates no difference in precourse content knowledge between students in the SCALE-UP–type classroom and students in the Traditional classroom (Table 3). These data suggest that students in both sections began the course with similar content knowledge.

A mixed-design ANOVA for between–within subjects analysis was conducted to evaluate the impact of the classroom type (SCALE-UP, Traditional) on student content knowledge across two time periods (preinstruction and postinstruction); the impact of other variables is addressed below in the section on Linear Regression Analysis. No significant interaction between section and time was observed, Wilks lambda = 0.996, F(1, 69) = 0.302, p = 0.59, partial eta-squared = 0.004. A significant main effect for time existed, Wilks lambda = 0.61, F(1, 69) = 44.72, p < 0.001, partial eta-squared = 0.393. Although both groups exhibited increases in content knowledge after instruction, students in the Traditional group exhibited greater gains than those in the SCALE-UP group, with moderate effect size. Although the full 18-item form was not unidimensional (as noted in Materials and Methods), mixed-design ANOVA was run on total scores to address potential reader concerns that those data would support SCALE-UP spaces as more effective. Results were nearly identical to the valid eight-item form. Note that this 18-item form cannot be considered a valid measure in the absence of unidimensionality, hence the reporting of statistical analysis for the eight-item form only.

Linear Regression Analysis. Linear regression was used to investigate other variables that may impact post knowledge and explain the difference in Traditional and SCALE-UP sections. Linear regression indicates that interaction effects are not significant; main effects of gender, class level, and group effort were also insignificant predictors of post knowledge. A stepwise regression including only the significant variables of pre knowledge, individual effort, and section was then run (Table 4). Overall adjusted model fit was R2 = 0.124, meaning that the model explains 12.4% of the variance in postknowledge scores. Prior knowledge and individual student effort are the only significant variables, together explaining 12.4% of the adjusted variance in postknowledge scores. All other variables, including section, do not explain any significant portion of the variance, suggesting that, when pre knowledge and individual effort are considered, section placement plays little role in explaining postknowledge scores.

TABLE 4. Summary of hierarchical regression analysis of post knowledge (B, SE B, and β for four models; predictors: pre knowledge, individual effort, section [Traditional, SCALE-UP], group effort, class level, gender; adjusted R2 and F for change in R2 reported for each model). *p ≤ 0.05.
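The hierarchical/stepwise regression summarized in Table 4 was fit in SPSS; the sketch below shows how a comparable model (post knowledge on pre knowledge, individual effort, and section) could be fit with statsmodels. The simulated data frame, column names, and coefficients are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data (not the study's): post knowledge modeled from pre knowledge,
# individual effort, and section (0 = Traditional, 1 = SCALE-UP).
rng = np.random.default_rng(1)
n = 71
df = pd.DataFrame({
    "pre_knowledge": rng.integers(0, 9, n),
    "individual_effort": rng.normal(820, 230, n),
    "section": rng.integers(0, 2, n),
})
df["post_knowledge"] = (2 + 0.4 * df["pre_knowledge"]
                        + 0.003 * df["individual_effort"]
                        + rng.normal(0, 1.8, n))

X = sm.add_constant(df[["pre_knowledge", "individual_effort", "section"]])
model = sm.OLS(df["post_knowledge"], X).fit()

print(model.params.round(3))      # unstandardized coefficients (B)
print(model.pvalues.round(3))
print("adjusted R^2:", round(model.rsquared_adj, 3))
```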
Student Perceptions: Technology and Infrastructure Survey
Students' perceptions of classroom technology and infrastructure were measured at the end of the semester in both sections. About two-thirds of the students in each section completed the survey with similar response rates across sections, with n = 33 (67%) of SCALE-UP students and n = 42 (69%) of Traditional students completing the survey. With the exception of a larger percentage of females completing the technology and infrastructure survey in the Traditional section, no obvious differences in average ACT scores, grades, or other demographic variables were observed between subsets of students who completed the survey and the entire sections (Table 1).

Students reported a similar level of interaction with the instructor in both sections, in line with the instructor's intentions. Similarly, all respondents in both sections reported owning Wi-Fi–enabled mobile devices that could be used for Internet-based classroom activities. Despite access to personal devices, more than 70% of students in each section reported that the tablet was very useful during group activities and that team performance on classroom assignments would have suffered without this technology (Tables 5 and 6). When asked in the open-ended questions what technology in the course helped them learn, eight of 23 respondents from the SCALE-UP section and 10 of 21 respondents from the Traditional section mentioned the tablets in a favorable manner, including indications that they were useful for freehand drawing. For example, one student reported, "The tablets were very helpful when drawing scientific models because we didn't have to use a key pad or mouse which would have made it very difficult to draw detailed characteristics." Taken as a whole, responses suggest that students valued the tablets, because they were able to use them to create freehand drawings during the modeling activities, although some students found the tablets hard to use.

Students in the SCALE-UP section responded favorably to questions regarding the group monitor's utility and impact on
their performance, while responses from students in the Traditional section to questions about the impact of adding monitors to the classroom were mixed (Tables 5 and 6). Several students from the SCALE-UP section noted issues with their group monitors not working in the open-ended portion of the survey.

Students in the SCALE-UP section also responded favorably to questions regarding the classroom layout, while responses from students in the Traditional section to questions about the impact of the classroom were less favorable (Tables 5 and 6). Answers on the open-ended section of the survey suggest that the classroom layouts influenced interactions between group members. In the SCALE-UP section, nine of the 22 responses regarding aspects of the classroom that helped learning mentioned interactions with other students, for example, "The big tables were very helpful rather than desks. It's easier for teams to work together that way." SCALE-UP students provided no suggestions related to improving student–student interactions. However, five of the 20 responses from the Traditional section regarding how the classroom could be improved mentioned the seating arrangements and interactions with other students. For example, one student wrote: "I think that it would be helpful if we were able to … move our chairs around. At times it could be
a bit disjointed trying to work with three people and not having all people be able to face each other/see the device." Another student said, "If we had a different classroom with tables and chairs, I feel like that would have helped communication when we needed to work on team projects and activities."

The presence of power outlets in the SCALE-UP–type classroom also resulted in differences in student experiences. Results from the survey indicate that 54% of respondents in the SCALE-UP–type classroom charged their devices during class once or more a week and that 12% of respondents from the Traditional classroom had issues with their mobile devices and tablets losing power during class at least once during the semester. When asked in the open-ended portion of the survey how the classroom could be improved, three of the 20 respondents from the Traditional classroom noted the need for power outlets.

TABLE 5. Percentage of respondents indicating each level of utility for different aspects of technology or infrastructure in their classrooms

                      Q6: Tablet utility          Q4: Group monitor utility(a)   Q9: Classroom type utility
Utility of resource   Traditional   SCALE-UP      Traditional   SCALE-UP         Traditional   SCALE-UP
Very useful           74%           70%           33%           73%              21%           73%
Somewhat useful       23%           27%           47%           24%              51%           27%
Not useful            2%            3%            19%           3%               26%           0%
Did not answer        0%            0%            2%            0%               2%            0%

(a) In the case of group monitors, students in the Traditional classroom were asked how useful a group monitor would have been had it been added to the classroom.

TABLE 6. Percentage of respondents indicating each level of performance change if different aspects of technology or infrastructure in their classrooms were removed(a)

Impact on performance    Q7: Remove tablets         Q5: Add or remove group monitors   Q10: Change the type of classroom
if resource was changed  Traditional   SCALE-UP     Traditional   SCALE-UP             Traditional   SCALE-UP
Suffered                 77%           65%          7%            54%                  5%            78%
Not changed              23%           35%          65%           38%                  53%           22%
Improved                 0%            0%           28%           8%                   40%           0%
Did not answer           0%            0%           0%            0%                   0%            0%

(a) In the case of group monitors, students in the Traditional classroom were asked how their performance would have changed had group monitors been added to the classroom.

DISCUSSION
The major result of our study is that the SCALE-UP–type classroom did not enhance student performance relative to the Traditional-type classroom. We found no significant difference in individual or group effort in the two sections and no significant difference on classroom assessment performance between the two sections. Although mixed-design ANOVA suggests a difference, in favor of the Traditional section, on postknowledge scores, linear regression results indicate that individual students' prior knowledge and individual efforts explain section differences. We acknowledge that our study suffers from small sample size, which may have inhibited our ability to detect differences across treatments, and pseudoreplication (Hurlbert, 1984), as we applied statistical analysis to data gathered in a study that lacks replication across the hypothesis space being tested. However, these results are significant, as they contradict the results of two similar previously published studies (Brooks, 2011; Cotner et al., 2013). All three studies are relatively small and suffer from pseudoreplication, common faults of this type of study due to constraints such as the limited number of sections typically taught by the same instructor, changes in teaching assignments over time, and the availability of specific instructional spaces. Completing more studies that include independent replicates, ideally across institutions, and paying careful attention to details, as per the discussion that follows, would enable examination of why the outcomes of these three small studies differ.

The two previous studies (Brooks, 2011; Cotner et al., 2013) used research questions and experimental design similar to this study and found that student performance was enhanced in the SCALE-UP–type classrooms. These previous studies compared performance of students in active-learning spaces and traditional classrooms when the courses were taught using the same instructor, teaching methods, and assessments (Brooks, 2011; Cotner et al., 2013). All three studies were carried out in introductory biology courses for non–science majors, and all three studies used teaching approaches that were learner centered. Differences between the three studies cannot be attributed to class size, as Brooks (2011) reported sections of similar sizes to those discussed here.

Some important differences in data collection and analysis between our study and both prior studies might explain the different results. In the current study, we used a validated content knowledge measure rather than course grades as our outcome variable. While grades provide some insight into student learning, grades may exhibit more noise than a psychometrically defined unidimensional scale.

Subtle differences in instructional approach may also explain the differences across studies. While all three studies used "active-learning" approaches, the description of the active-learning strategies used in each study varied. We describe our active-learning strategy as flipped instruction; the active-learning strategy in Brooks (2011) was described as a hybrid lecture/problem-solving approach; and Cotner et al. (2013) describe use of active-learning techniques without further explanation. In our instructional approach, groups were provided with instructions for developing a model or analyzing an article and often spent one-half to two-thirds of each class period working on the activity before a group was chosen to report to the entire class. The instructor and students interacted throughout each class as students asked questions and shared progress. Students in the current study reported similar levels of interaction across the SCALE-UP and Traditional sections. In contrast, Brooks (2011) and Cotner et al. (2013) found that instructors interacted more with students in the SCALE-UP–type classrooms than in traditional rooms. This equality of student–instructor interactions across sections may have contributed to the similar learning across our two sections. If this is the case, there are important implications for large-enrollment sections, as one instructor can interact effectively only with a limited number of groups during any one class period. One possible solution is the use of instructional teams of well-trained graduate and undergraduate students who interact with students during class (Smith et al., 2005). However, more research is needed to determine the impact of student–instructor interactions on student performance and the effectiveness of substituting graduate or undergraduate students for instructors during these interactions (Kendall and Schussler, 2012; Knight et al., 2015).

Several factors likely contributed to common levels of interactions across sections. First, the tablet provided a focal point for the groups during many activities across both sections. In the Traditional section, the rows were curved. This allowed students to sit in three consecutive desks as well as adjacent rows and still be able to simultaneously view a common tablet or device. The Traditional classroom design and the use of a common device may have enhanced student–student interactions in the Traditional section and contributed to the lack of differences in student outcomes.

Students in this study exhibited positive views of the technology and infrastructure in SCALE-UP–type classrooms and felt the room layout enhanced their performance even when our postcourse analysis indicated that it did not. Previous studies have also shown that students have positive views of SCALE-UP–type rooms (Dori and Belcher, 2005; Beichner et al., 2007; Whiteside et al., 2010). In the current study, SCALE-UP students reported that technology in the room and the movable chairs and round tables helped them learn and that their performance in the class would have suffered if these items had not been present. Similarly, Traditional classroom students indicated a preference for a more flexible seating arrangement. Taken together, these data support the idea that students prefer a SCALE-UP–type classroom layout for active
learning and suggest that this layout fosters interactions between group members.

Finally, our study agrees with results from other recent studies showing that student ownership of mobile devices is approaching 100% (Cassidy et al., 2014) and that students are using their own mobile devices in classrooms to support and enhance their classroom learning (Biddix et al., 2015). Early reports on learning gains in SCALE-UP–type classrooms took place in an era when including technology in the classroom was important, because the technology allowed students to retrieve and interact with digital content that they would not otherwise have been able to access (Dori and Belcher, 2005; Beichner et al., 2007). In 2001, around the time the initial versions of SCALE-UP–type classrooms were developed, a minority of students owned laptops. For example, only about one-third of students at UVA owned a laptop (University of Virginia Instructional Technology Services, 2009). In 2001, students also did not own smartphones, as such devices did not exist; BlackBerry released the first cell phone with email capability in 2001, and the first iPhone was not released until 2007. This meant that, in order for students to interact with digital media, technology needed to be provided by the instructor. Today, most students own a laptop and/or a smartphone. In 2014, student ownership of laptop computers was greater than 95%, and ownership of cell phones was greater than 98% (Cassidy et al., 2014). In our study, 100% of students reported owning a laptop. Because Wi-Fi is also present in classrooms on most college campuses, students can easily access and share digital information using their own devices. The current level of individual access to technology suggests that technology provided in most SCALE-UP–type classrooms, such as monitors, may no longer be necessary.

It is clear that active learning can improve student outcomes (Michael, 2006; Freeman et al., 2014). However, our understanding of the specific aspects of classroom technology, instructional approaches, and contextual factors that can lead to improvements in student outcomes is limited. For example, a major feature of flipped instruction that is thought to increase student learning is moving the less difficult task of concept acquisition out of the classroom and using valuable class time to focus on the more difficult task of concept application and problem solving. Counter to this idea, a recent study found no differences in student performance when class time was spent focusing on either content acquisition or content application and problem solving (Jensen et al., 2015). The study found that flipped instruction did not improve student performance over that achieved in a previous section of the course that incorporated active-learning strategies. The authors of the study therefore suggested that well-designed active learning is the most important feature of effective instruction, not where (in class or at home) the active learning occurs.

Prior work coupled with the current study suggests a need for future work. Specifically, studies should investigate which specific aspects of instructional approaches, instructional technology, and learning spaces increase student learning. In addition, the specific methods used to measure student performance and learning gains likely impact findings, and future studies should incorporate a broad spectrum of research-quality metrics to help delineate how instructional approaches, technology, and instructional spaces impact student learning.

CONCLUSIONS
On the basis of results of our study in conjunction with results from prior work (Andrews et al., 2011; Brooks and Solheim, 2014; Straumsheim, 2014; Jensen et al., 2015), we conclude that adding technology to a classroom, remodeling classrooms to facilitate interactions, flipping instruction, or even adding active learning to a course is not a panacea that produces better outcomes for students. Factors unique to each instructional situation likely influence outcomes, and care must be taken when assuming strategies shown to work in one situation will transfer to learning gains in similar situations.

Building instructional spaces requires significant expenditures, especially when technology is included in the classroom. In 2007, the University of Minnesota renovated two classrooms following a SCALE-UP–type model. Design, technology purchase and installation, and furniture for a room with 45 seats cost $147,000, and a room with 117 seats cost $269,000 (Whiteside et al., 2009). These costs did not include other expenses associated with renovating the rooms. At our institution, renovating the room in which the SCALE-UP section was taught cost $192,000, and renovating a smaller SCALE-UP–type classroom with 36 seats cost $128,000. The cost is substantially reduced to $30,000, or one-fourth to one-sixth of the cost, when technology is not included in the renovation (S. Grabski, personal communication). Clearly, room renovations to facilitate group work are much cheaper in the absence of embedded technology. This greatly reduced cost and student preference for SCALE-UP spaces, coupled with the equivalent learning observed across the two sections in our study, suggest that altering classrooms to facilitate student–student and student–instructor interactions may be worth the cost. This is especially true in spaces designed for large numbers of students, where instructors will be unlikely to replicate the interactions made possible by the small number of students enrolled in the Traditional section in the current study.

As described by other authors and experienced firsthand in this study, there is a significant cost for remodeling classrooms to facilitate active learning and adding technology to classrooms (Cotner et al., 2013) and for developing flipped instruction (Jensen et al., 2015). Until we better understand the mechanisms by which these changes produce improved student outcomes, we should be cautious with our investments of scarce resources. Based on evidence that highly skilled instructors with an intimate understanding of education research and active-learning pedagogy can have a significant favorable impact on student learning, scarce resources may be best spent on 1) training faculty, 2) providing flexible spaces rather than embedding expensive technology into rooms, and 3) maintaining smaller sections or providing well-trained graduate teaching assistants or undergraduate learning assistants to maintain frequent and high-quality interactions between students and the instructional staff during class time. Finally, we should be cautious and not measure progress in education simply by the amount of active learning reported in classrooms or by the number of remodeled instructional spaces made available for faculty. Scientific teaching requires constant evaluation of student outcomes to determine what works and what does not work (Handelsman et al., 2004; American Association for the Advancement of Science, 2011).

We encourage the use of validated research assessments in tandem with grades to investigate advances in undergraduate learning. The use of multiple measures of learning is likely our best avenue for effective assessment and research into the impacts of instruction on students.

ACKNOWLEDGMENTS
Funding from an internal LPF-CMP2 Innovation Grant awarded through the CREATE for STEM Institute provided partial support for this project. We also thank faculty in the Center for Integrative Studies in General Science and members of the Geocognition Research Lab for their assistance with this study and review of this article.

REFERENCES
American Association for the Advancement of Science (2011). Vision and Change in Undergraduate Biology Education: A Call to Action, Washington, DC.
Andrews TM, Leonard MJ, Colgrove CA, Kalinowski ST (2011). Active learning not associated with student learning in a random sample of college biology courses. CBE Life Sci Educ 10, 394–405.
Beichner RJ, Saul JM, Abbott DS, Morse JJ, Deardorff DL, Allain RJ, Bonham SW, Dancy MH, Risley JS (2007). The Student-Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) Project. In: Reviews in PER, vol. 1: Research-Based Reform of University Physics, ed. E Redish and P Cooney, College Park, MD: American Association of Physics Teachers. www.percentral.com/PER/per_reviews/media/volume1/SCALE-UP-2007.pdf (accessed 7 September 2014).
Biddix JP, Chung CJ, Park HW (2015). The hybrid shift: evidencing a student-driven restructuring of the college classroom. Comput Educ 80, 162–175.
Bowling BV, Acra EE, Wang L, Myers MF, Dean GE, Markle GC, Moskalik CL, Huether CA (2008). Development and evaluation of a genetics literacy assessment instrument for undergraduates. Genetics 178, 15–22.
Brooks DC (2011). Space matters: the impact of formal learning environments on student learning. Br J Educ Technol 42, 719–726.
Brooks DC, Solheim CA (2014). Pedagogy matters, too: the impact of adapting teaching approaches to formal learning environments on student learning. New Dir Teach Learn 137, 53–61.
Cassidy ED, Colmenares A, Jones G, Manolovitz T, Shen L, Vieira S (2014). Higher education and emerging technologies: shifting trends in student usage. J Acad Libr 40, 124–133.
Cotner S, Loper J, Walker JD, Brooks DC (2013). "It's not you, it's the room"—are the high-tech, active learning classrooms worth it? J Coll Sci Teach 42, 82–88.
Crouch CH, Mazur E (2001). Peer instruction: ten years of experience and results. Am J Phys 69, 970–977.
Dori YJ, Belcher J (2005). How does technology-enabled active learning affect undergraduate students' understanding of electromagnetism concepts? J Learn Sci 14, 243–279.
Dori YJ, Hult E, Breslow L, Belcher J (2007). How much have they retained? Making unseen concepts seen in a freshman electromagnetism course at MIT. J Sci Educ Technol 16, 299–323.
Freeman S, Eddy SL, McDonough M, Smith MK, Okoroafor N, Jordt H, Wenderoth MP (2014). Active learning increases student performance in science, engineering, and mathematics. Proc Natl Acad Sci USA 111, 8410–8415.
Frey BB, Petersen S, Edwards LM, Pedrotti JT, Peyton V (2005). Item-writing rules: collective wisdom. Teach Teach Educ 21, 357–364.
Gaffney JD, Richards E, Kustusch MB, Ding L, Beichner RJ (2008). Scaling up education reform. J Coll Sci Teach 37, 48–53.
Hair JF, Black WC, Babin BJ, Anderson RE, Tatham RL (2006). Multivariate Data Analysis, 6th ed., Upper Saddle River, NJ: Pearson Education.
Haladyna TM, Downing SM (1989). A taxonomy of multiple-choice item-writing rules. Appl Meas Educ 2, 37–50.
Haladyna TM, Downing SM, Rodriguez MC (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Appl Meas Educ 15, 309–333.
Handelsman J, Ebert-May D, Beichner R, Bruns P, Chang A, DeHaan R, Gentile J, Lauffer S, Stewart J, Tilghman SM, et al. (2004). Scientific teaching. Science 304, 521–522.
Hibbard L, Sung S, Wells B (2016). Examining the effectiveness of a semi-self-paced flipped learning format in a college general chemistry sequence. J Chem Educ 93, 24–30.
Hurlbert SH (1984). Pseudoreplication and the design of ecological field experiments. Ecol Monogr 54, 187–211.
Jensen JL, Kummer TA, Godoy PDdM (2015). Improvements from a flipped classroom may simply be the fruits of active learning. CBE Life Sci Educ 14, ar5.
Johnson DW, Johnson RT, Smith KA (1991). Cooperative Learning: Increasing College Faculty Instructional Productivity (ASHE-ERIC Higher Education Report No. 4), Washington, DC: George Washington University.
Kendall KD, Schussler EE (2012). Does instructor type matter? Undergraduate student perception of graduate teaching assistants and professors. CBE Life Sci Educ 11, 187–199.
Klymkowsky MW, Garvin-Doxas K, Zeilik M (2003). Bioliteracy and teaching efficacy: what biologists can learn from physicists. Cell Biol Educ 2, 155–161.
Knight JK, Wise SB, Rentsch J, Furtak EM (2015). Cues matter: learning assistants influence introductory biology student interactions during clicker-question discussions. CBE Life Sci Educ 14, ar41.
Michael J (2006). Where's the evidence that active learning works? Adv Physiol Educ 30, 159–167.
Quillin K, Thomas S (2015). Drawing-to-learn: a framework for using drawings to promote model-based reasoning in biology. CBE Life Sci Educ 14, es2.
Reynolds JA, Thaiss C, Katkin W, Thompson RJ (2012). Writing-to-learn in undergraduate science education: a community-based, conceptually driven approach. CBE Life Sci Educ 11, 17–25.
Shi J, Wood WB, Martin JM, Guild NA, Vicens Q, Knight JK (2010). A diagnostic assessment for introductory molecular and cell biology. CBE Life Sci Educ 9, 453–461.
Smith AC, Stewart R, Shields P, Hayes-Klosteridis J, Robinson P, Yuan R (2005). Introductory biology courses: a framework to support active learning in large enrollment introductory science courses. Cell Biol Educ 4, 143–156.
Smith MK, Wood WB, Adams WK, Wieman C, Knight JK, Guild N, Su TT (2009). Why peer discussion improves student performance on in-class concept questions. Science 323, 122–124.
Smith MK, Wood WB, Knight JK (2008). The Genetics Concept Assessment: a new concept inventory for gauging student understanding of genetics. CBE Life Sci Educ 7, 422–430.
Stockwell BR, Stockwell MS, Cennamo M, Jiang E (2015). Blended learning improves science education. Cell 162, 933–936.
Straumsheim C (2014). Room to experiment. Inside Higher Ed. www.insidehighered.com/news/2014/12/12/interactive-learning-spaces-center-ball-state-us-faculty-development-program (accessed 7 September 2015).
Strayer J (2012). How learning in an inverted classroom influences cooperation, innovation and task orientation. Learn Environ Res 15, 171–193.
Tanner KD (2009). Talking to learn: why biology students should be talking in classrooms and how to make it happen. CBE Life Sci Educ 8, 89–94.
University of Virginia Instructional Technology Services (2009). UVa First-Year Student Computer Inventory: Year-to-Year Comparison, 1997–2009. http://its.virginia.edu/students/inventory/compare (accessed 7 September 2015).
van Vliet EA, Winnips JC, Brouwer N (2015). Flipped-class pedagogy enhances student metacognition and collaborative-learning strategies in higher education but effect does not persist. CBE Life Sci Educ 14, ar26.
Whiteside A, Brooks DC, Walker JD (2010). Making the case for space: three years of empirical research on learning environments. EDUCAUSE Q. www.educause.edu/ero/article/making-case-space-three-years-empirical-research-learning-environments (accessed 22 August 2015).
Whiteside A, Jorn LA, Duin AH, Fitzgerald S (2009). Using the PAIR-up model to evaluate active learning spaces. EDUCAUSE Q. http://er.educause.edu/articles/2009/3/using-the-pairup-model-to-evaluate-active-learning-spaces (accessed 22 August 2015).
