Month: September 2016

Why >= How: Homework

I hear many instructors lamenting that their students are not doing their homework to the instructor’s satisfaction. If we agree that homework is an important part of the learning process, then it is important to tackle this problem. 

Do your students know why they are doing homework? Don’t be so sure that they do. Many students do it because it’s part of the game, because they are told to do it, because they get points for doing it. They should be doing homework because homework can increase their understanding. You cannot assume that they know this. 

On the first day of class I often ask my algebra or pre-algebra students, “What do good students do?” They can develop quite a list of good student behaviors – coming to class every day, taking notes, doing homework, studying, etc. But when I ask why they take notes, I hear crickets – or vague answers like “everybody seems to do it” or “I’ve always done it.” We have a quick discussion about what notes are for, how to use them after class, and what belongs in them.

In my class, homework does not directly affect a student’s grade unless the student is also passing the exams. I make sure that students understand that the goal of homework is to increase their understanding, which will be measured on the exams. Equally important: the goal of doing homework is not simply to accumulate points.

Because my students know why I assign homework they understand its importance. They do not view it as some sort of busywork. And they do it. And they do it well. Of course we have discussions about how to approach doing homework in such a way that students will maximize their learning, just not before they understand why they are doing it. 

Building an Early Inferential Approach into the Calendar

I have had a few questions about how I am managing to work all of these early inferential projects into my Intro Stats course.

1) Switching from Chapter Exams to a Midterm and a Final

I used to give 4 exams over the first 7 chapters of our textbook, which meant roughly 4 days for exams and 6 days for review. Now I have only 4 days built into my calendar: 2 days for review and 2 days for the midterm exam. That is a net gain of 6 days in the first half of the course.

I have 8 project days scheduled in the first half of the course, so that only puts me two days behind. But I have been able to avoid spending more than one day on any section, so there are unofficial gains there.

I have checked in with two colleagues and I am one day behind one of them and even with the other.

In the second half of the course I will apply the days saved from chapter tests to cover alternatives to the traditional hypothesis tests, including simulations and non-parametric tests.

2) I Will Not Have To Introduce Hypothesis Testing in the Second Half of the Course

I typically spend 4 days to cover the first hypothesis test (the 1-proportion test), but I should be able to jump right in and cover that test in one day.

My Calendar for Weeks 1-5

Here is the schedule I have followed to this point. The project days are the rows labeled “Project.”

Date Topic
15-Aug Day 1 Syllabus etc.
16-Aug 1.1 Intro to Stats
17-Aug 1.2 Observational Studies, Experiments
18-Aug 1.3/1.4 Sampling Techniques
22-Aug 1.6 Experimental Design
23-Aug 2.1 Qualitative Graphs
24-Aug Project 1: Simulation for 1-Proportion
25-Aug Project 2: Randomization Test for 2-Proportions
29-Aug 2.2 Quantitative Graphs
30-Aug 3.1 Measures of Center
31-Aug Project 3: Bootstrap Method for Estimating a Mean or Median
1-Sep Project 4: Using Simulation for a Population Mean
5-Sep holiday
6-Sep Project 5: Using Bootstrap Method for a Paired Difference Test
7-Sep 3.2 Measures of Dispersion
8-Sep 3.4 Quartiles
12-Sep 3.5 5-Number Summary and Boxplots
13-Sep Project 6: Randomization Test for Two Means
14-Sep 4.1 Correlation
15-Sep Project 7: Hypothesis Test for Correlation


Randomization Test for Two Means

This semester I have incorporated a three-pronged strategy in my intro statistics classes:

  1. Flip the classroom – having students learn material at home
  2. Make the classroom more engaging – using more group activities and Learning Catalytics sessions
  3. Focus on inferential statistics early and often

This week began with students learning about the 5-number summary and how to create a boxplot. On Monday we did an activity where students compared two samples of quantitative data in an effort to determine whether the population means were different. That was a natural lead-in to the randomization test for two means, which is an inferential concept that students can handle early in the semester.

Download the Two Mean Randomization project (pdf) here

This project began with the same data as Monday’s activity, and we found that approximately 5% of the trials produced a mean difference at least as extreme as the observed difference between the samples. Most students came in slightly below 5% and were able to conclude that there was a difference between the two population means. A few students (myself included) came in at 5% or slightly higher and were not able to conclude that the population means were different (it remained plausible that the two population means were equal). This led to a great discussion of how our results varied and the implications of that variability.
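The class does all of this simulation in StatCrunch, but for readers who want to see the mechanics in code, here is a minimal sketch of a randomization test for two means in Python. The two samples, the number of trials, and the two-sided comparison are placeholder assumptions, not the data or settings from Monday’s activity.

```python
import numpy as np

rng = np.random.default_rng()

# Placeholder samples -- substitute the two sets of class data.
group_a = np.array([78, 85, 92, 66, 74, 88, 95, 81])
group_b = np.array([70, 79, 84, 61, 68, 90, 73, 77])

observed_diff = group_a.mean() - group_b.mean()

# Under the null hypothesis the group labels are interchangeable,
# so we repeatedly shuffle all values into two groups of the same sizes.
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
trials = 1000
extreme = 0

for _ in range(trials):
    shuffled = rng.permutation(pooled)
    diff = shuffled[:n_a].mean() - shuffled[n_a:].mean()
    if abs(diff) >= abs(observed_diff):   # at least as extreme as what we observed
        extreme += 1

print(f"Proportion of trials at least as extreme: {extreme / trials:.3f}")
```

If that proportion comes in under the 5% cutoff, the shuffling rarely reproduces a difference as large as the observed one, which is the same reasoning the students applied to their StatCrunch output.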

I followed up with two other investigations. The first compared Overall Quality ratings from ratemyprofessors.com for mathematics instructors and English instructors. The second compared Overall Quality ratings for instructors at my college and nearby Fresno State. Students gathered all of the data before showing up for class.

Now that we have discussed many inferential concepts we will write up our first formal hypothesis test tomorrow when trying to determine whether a linear relation exists between two variables. Look for that blog post later this week.

Comparing Two Samples (Quantitative)

My students are wrapping up the part of the course where we cover descriptive statistics. I gave them two sets of data (test scores from two different versions of the same exam) and they spent the day in class computing sample statistics and creating graphs for each sample. Their overall goal was to analyze their results and determine whether there was a significant difference between the two versions or not.

Download a pdf of this activity

Students compared measures of central tendency and the 5-number summaries, and I asked them to share their observations. They went on to compute measures of dispersion, and then we talked about whether the two samples had similar spread. Finally, they created histograms, pie charts, and boxplots, and we discussed what they felt the graphs were telling them.
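For anyone who wants to reproduce those summaries in code rather than by hand or in StatCrunch, here is a minimal sketch with placeholder scores standing in for the two exam versions.

```python
import numpy as np

# Placeholder scores -- substitute the two sets of exam results.
version_a = np.array([72, 85, 90, 64, 78, 88, 69, 81, 75, 93])
version_b = np.array([68, 80, 84, 70, 74, 91, 66, 77, 83, 79])

for name, scores in [("Version A", version_a), ("Version B", version_b)]:
    # np.percentile interpolates, so a textbook quartile rule may differ slightly.
    q1, median, q3 = np.percentile(scores, [25, 50, 75])
    print(name)
    print("  mean:", round(scores.mean(), 2))
    print("  5-number summary:", scores.min(), q1, median, q3, scores.max())
    print("  sample std dev:", round(scores.std(ddof=1), 2), " IQR:", q3 - q1)
```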

We had another great opportunity to discuss the fact that a perceived difference is not necessarily a significant one; we need some sort of repeated sampling to determine whether the observed difference (the sample means were 4.8 points apart) would be unusual if there were really no difference between the versions.

This leads into our sixth project of the semester where we will use the randomization test for two means to determine whether the observed difference was significant. We will use StatCrunch for this test, although there are many other tools out there that can be used. My students will then move on to apply this test to two sets of data they collected. I will blog about the outcomes of that project in my next post.

Bootstrap – Matched Pairs

This week I began with a bootstrap project for a paired-difference/matched-pairs scenario.

Download a pdf of the Project Here

One of my goals is to get students working with data they have collected, so I had students collect prices for 25 identical items at two stores. We used this for one of the investigations.

Investigation 1

A researcher was investigating whether sons are taller than their fathers. My students were provided with 13 matched pairs. I had them find the difference for each pair (d = father’s height – son’s height). They had to determine whether to expect differences that were positive or differences that were negative. This is an important skill when setting up the alternative hypothesis, and I was happy with how well my students understood what type of differences to expect.

We applied the bootstrap method to the sample differences, and the results are shown below.

[StatCrunch bootstrap results for the paired differences]

Since the interval from the 2.5th percentile to the 97.5th percentile contained 0, we were unable to conclude that there is a difference between the heights of fathers and their sons. We had a great opportunity to discuss the implication of 0 being contained in the interval – and my students were able to understand that if a difference of 0 is in the interval then it is possible that there is no difference between the two groups.
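For readers who prefer code to StatCrunch, here is a minimal sketch of the same bootstrap applied to paired differences; the heights below are made up for illustration and are not the 13 pairs we used in class.

```python
import numpy as np

rng = np.random.default_rng()

# Made-up heights (inches) standing in for the 13 father-son pairs.
fathers = np.array([70.5, 68.0, 72.1, 69.3, 71.0, 66.8, 73.2,
                    70.0, 67.5, 69.9, 72.5, 68.7, 71.4])
sons    = np.array([71.0, 69.2, 71.5, 70.1, 70.4, 68.0, 72.8,
                    71.1, 67.0, 70.5, 73.0, 68.2, 72.0])

d = fathers - sons                       # d = father's height - son's height

# Resample the differences with replacement 1000 times, keeping each mean.
boot_means = np.array([
    rng.choice(d, size=len(d), replace=True).mean()
    for _ in range(1000)
])

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap interval for the mean difference: ({lo:.2f}, {hi:.2f})")
print("0 inside the interval?", lo <= 0 <= hi)
```

If 0 falls inside the interval, the bootstrap cannot rule out “no difference,” which is exactly the conclusion the class reached.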

Investigation 2

Here my students used their data from the two stores in an effort to support the claim that prices at Store A are lower than they are at Store B. We had some interesting results, including groups that reached 3 different conclusions when comparing Walmart and Target (Walmart is cheaper, Walmart is more expensive, no difference).

Follow Up

The following day in class I included a Learning Catalytics session where students were given various scenarios and intervals and asked for the appropriate conclusion. They proved that they retained their understanding, and several students displayed that by sharing their reasoning with the class after the question was closed for responses.

I am looking forward to the day when we cover the formal (p-value) hypothesis test for paired differences to see how well they understand the big picture.

Game On in Algebra: Unexpected Rewards

If you think back to some games you have played, what can be more fun than an unexpected reward? Unexpected rewards can be fun AND motivating.

On the day that I pass back the first exam, I walk around with a bag of plastic gold coins. I hand one to each student who earned the full 3 points on the exam. (That means they leveled up by meeting the performance benchmarks on each HW assignment and quiz, and also scored 80% or better on the pencil & paper exam.) The classroom starts to buzz. Students are wondering what’s up with the coins. A couple of students will be saddened to learn that the coins are not chocolate. An occasional student will be saddened to find out that the coin is not actually made of gold, but that’s pretty rare.

When I am done giving out the coins I explain that students can turn in a coin to reopen any one assignment or quiz. My theory is that their performance deserves some benefit, and being able to save themselves from a missed assignment is a nice perk. I will not reopen any assignment unless a student gives me a coin.

Daniel Pink in his book Drive explains that expected rewards can actually “de-motivate” students. (If you haven’t read his book, you need to. It will open your eyes as to how to get students to respond.) I believe that this is true. To avoid this problem, I tell the students that there may be other benefits for students who still have coins left at the end of the semester. This way they are never sure exactly what will happen. And I like it that way.

I just passed back the first elementary algebra exam and will write a new post in which I will discuss how I will proceed from here. Spoiler Alert: I gave out 7 coins in a class of 44 students.

Bootstrap Method – Estimating a Population Mean

Last week we did our third project that focuses on introducing inferential statistics earlier in the semester.

Download the Activity (pdf) Here

The bootstrap method repeatedly resamples from a sample (with replacement) to develop an interval estimate of any population parameter. For example, if we have a sample of 10 numerical values, we select 10 values from it (with replacement) and compute the mean of that resample. We repeat that process for a total of 1000 resamples. We can then use the 2.5th and 97.5th percentiles of those 1000 means as a 95% bootstrap confidence interval estimate for the population mean.

Investigation 1

Here’s the first example we walked through together, bootstrapping a 95% interval estimate:
A manager of a fast food restaurant devises a new drive-through system that he believes will decrease wait time from the time an order is placed to the time the order is received.  He initiates the new system at his restaurant and measures the wait time for 10 randomly selected orders.  The wait times, in seconds, are provided below.

108.5 67.4 58.0 75.9 65.1
80.4 95.5 86.3 70.9 72.0

Use the bootstrap method to create a 95% confidence interval for the mean wait time for the new system.
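For anyone following along outside of StatCrunch, here is a minimal Python sketch of that bootstrap applied to the ten wait times; since the resampling is random, the endpoints will differ slightly from run to run (and from the StatCrunch output below).

```python
import numpy as np

rng = np.random.default_rng()

# The ten observed wait times, in seconds.
waits = np.array([108.5, 67.4, 58.0, 75.9, 65.1,
                  80.4, 95.5, 86.3, 70.9, 72.0])

# Resample with replacement 1000 times, recording each resample's mean.
boot_means = np.array([
    rng.choice(waits, size=len(waits), replace=True).mean()
    for _ in range(1000)
])

# The 2.5th and 97.5th percentiles give a 95% bootstrap interval for the mean.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap interval for the mean wait time: ({lo:.2f}, {hi:.2f})")
```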

Here are the StatCrunch results for bootstrapping 1000 samples.

The interval was (69.65, 87.54).

I followed up with an inferential question:
The manufacturer of the system claims that the mean wait time for all customers should be approximately 80 seconds.
Is this value contained inside the 95% interval?
Is the manufacturer’s claim plausible or is it unlikely to be true? Explain your decision.

Investigation 2

I followed up with a second investigation involving the sale prices of beachfront condos, and an inferential question where the claimed population mean fell outside of the interval.

Investigation 3

How old are students at our college?


I sampled the ages of 126 of my students and we applied the bootstrap method to this sample. I first had the students make their own claim about the mean (and median) age of all students at our college, and they then evaluated their claims using the bootstrapped estimation intervals.

I followed up by giving them the true mean and median age from our college’s information office, and both of these values were outside the estimation intervals. We then had a great chance to discuss the ways my sample could be biased and how the conclusions based on intervals from this sample were unreliable.

All in all, it was a great learning experience and I felt we took one giant step towards understanding the big picture in intro statistics.

Book Review: Teaching with Classroom Response Systems by Derek Bruff

Just finished reading this book by Derek Bruff (@derekbruff on Twitter), so I thought I’d share what I wrote on Goodreads. (By the way, if you’d like to be reading buddies, here’s my Goodreads profile page.)

Here goes …

****************************

Although some of the technology has really changed since this book was published, I would strongly recommend this book to anyone who plans to incorporate classroom response systems into their teaching. Bruff clearly lays out the pros and cons of different strategies of incorporation, grading schemes, question types, revealing correct answers, …

He does a great job introducing Peer Instruction, and I have been using that strategy in my classroom with great success. I feel that more of my students understand more of the material at this point of the semester than in previous semesters. (I am moving on to Mazur’s Peer Instruction book to go into greater depth on the strategy.)

I also love the concept of Agile Teaching. I love the uncertainty of not knowing which way the class will go, while remaining confident that I can adapt to what I am seeing from my students. I am teaching the same class back-to-back, and have had to focus on different topics with each class. It’s a lot of fun, and it fits in with my belief that we teach our students instead of teaching the material. I now walk into class each day thinking “I Am An Agile Teacher” and I feel so empowered. I should probably put that on a t-shirt.

Note: I am using Learning Catalytics as my classroom response system in my Intro Stats classes, and I am about to start using Plickers in my Elementary Algebra classes.