I am pleased to announce that last Wednesday, Parvaneh Jahani successfully defended her dissertation entitled, “Dynamic Warehouse Optimization Using Predictive Analytics.” Congratulations Parvaneh — Dr. Jahani!
Shortly after I arrived at UofL, I was told that Parvaneh might be interested in working on logistics problems. She had already done some initial work in quality, so I encouraged her to continue in her current area of research rather than change topics. Nevertheless, she insisted that she wanted a topic in logistics, so we began The Search.
Eventually she identified the simple question: “When a slot goes empty in the forward picking area of a warehouse, what should replace it?” The standard answer is, “the same SKU it held before,” but Parvaneh was interested in how the SKUs in the forward area might change over time as demand changes. For example, perhaps the empty slot would be better used for a new SKU experiencing strong demand. How should one make these decisions?
In practice, most warehouses establish a forward area based on historical demand, and managers then worry continually about whether it’s time to “reslot the forward area.” Parvaneh’s research answers this and many related questions. Most important, she develops an algorithm that changes the composition of the forward area continuously to reflect current demand patterns. Papers on these topics are on the way.
I am also pleased to report that Parvaneh has taken a challenging position at Intelligrated, a large material handling systems integrator near Cincinnati, where she is working on several interesting topics, including those related to her dissertation.
In early April (1-3) I will have the opportunity to teach the Engineering the Warehouse short course with John Bartholdi. The course is designed to introduce practitioners to the science of warehousing. John will teach most of the course, including sessions on slotting fast pick areas, bucket brigade order picking, and strategies for cycle counting.
I will lead sessions on warehouse layout (including non-traditional designs such as fishbone, leaf, and chevron), new developments in automation, and deadline-driven order fulfillment. Please join us!
Last year I offered my undergraduate Stochastic Operations Research course in both live and video formats. Students could attend the lecture, view a video stream in real time, or choose to watch it later. I wrote about the results here. This year I offered the same course in the same format, with pretty much the same results.
The basic finding from last year was that there is a strong (negative) correlation between a student’s reliance on video and his or her final grade. As the plot above (from last year) shows, A students attended almost every lecture and viewed very little video relative to other students. F students attended very little class and viewed some videos; their remaining exposure to the course material was, one can only fantasize, through reading the book. D students followed a similar pattern to a lesser degree, and C students less still. B students went to class as much as A students, but also viewed quite a few videos.
This year’s results were much the same. Below are class attendance results for the two years. Failing students attended slightly more than 1/3 of the offered classes. A and B students attended about 3/4 of all offered classes, on average. (Class attendance was self-reported by passing around an attendance sheet in each class. Attendance did not affect their grades, so students had no incentive to misreport their attendance.)
Overall class attendance was better this year than last, perhaps because I showed them the results of last year’s study in an attempt to keep them from relying on the videos! Alas, I only partially succeeded….
Overall, students watched significantly less video this year than last year, but students performing poorly (C, D, F) were still the largest consumers. So, a repeat of last year’s lesson: Better students attend class, worse students attend little class and hope the videos will fill in the gap.
I hasten to note that I have no proof of causality here. Does reliance on video cause poor performance, or do students who would have performed poorly anyway happen to view more video? I don’t have a control group to say for sure. I’ll let the reader speculate.
Now, here is the really striking plot, a sum of the previous two:
The plot above is simply the sum of bars in the previous two plots (for 2013 data), meaning the data give an upper bound on the percentage of classes the student groups attended or viewed. On average, the D and F students were exposed to at most 3/4 of the classes—wow. Go figure.
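To see why the summed ratios are only an upper bound on exposure, note that a student may both attend a class and later touch its video, so the same session can be counted twice. A minimal sketch, with made-up session numbers for a hypothetical student (none of these figures come from the actual course data):

```python
# Hypothetical: sessions (of 10 offered) one student attended live,
# and sessions the same student touched on video.
attended = {3, 5, 7, 9}
viewed = {3, 4, 9}          # sessions 3 and 9 overlap with attendance

attendance_ratio = len(attended) / 10          # 0.4
video_ratio = len(viewed) / 10                 # 0.3
# True exposure counts each session once, via set union.
true_exposure = len(attended | viewed) / 10    # 0.5, not 0.7

# Summing the two bar heights double-counts the overlap,
# so it can only overstate (bound) actual exposure.
assert true_exposure <= attendance_ratio + video_ratio
```

This is why "at most 3/4" is the right reading of the summed plot: the bars overstate exposure exactly when students both attended and viewed the same session.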
What to make of these experiments? First, the best students come to class. No surprise there. Second—and there is probably a nicer way to say this—poor students just seem to blow off this course. I tell them on Day 1 that this is the most difficult course they will have in the curriculum and that historically more than 15% of students fail, and still—still!—many don’t even bother to attend or view more than 1/4 of the class material! As Woody Allen said, “80 percent of life is just showing up.”
I have these same students this semester, but without video. It will be interesting to see how lack of access to video affects their performance. I am happy to report that class attendance is much higher than last semester. The big question for me in next year’s Stochastic OR class is, “Should I make video available or not?”
Last year I experimented with video-based lectures in my Stochastic Operations Research course for juniors. The results were surprising.
At the beginning of the course, I told the students that each class would be taped and that they were free to attend the lecture, view it on live stream (and ask questions via chat to a TA, who would ask the question on the student’s behalf), or view the stream anytime later. Complete freedom to consume the course material on your own terms—what could be better?
Enrollment was about 100 students, but attendance very quickly shrank to fewer than 50, and eventually settled in around 30. Some “video students” said they preferred the video stream because they could stop to take notes or “rewind” to listen to something again. The ability to view lectures via the stream was universally appreciated, though many students requested 10-minute lecturettes on specific topics, claiming that hunting through an hour of video to review a particular point was tedious (point well taken).
I confess that reducing the class size from 100 to 30 was both welcome and intended. My thinking was that the video students would get what they preferred and the attending students would get a much better experience as well. Of course, I much preferred teaching 30 live students to teaching 100.
Our goal was to understand how the video-only students would fare versus their class-attending classmates. Is video as effective as class attendance? What follows is only anecdotal evidence, because we had limited data and (worse) no control group. Nevertheless, there are some interesting observations.
The video system at Auburn allowed us to document the date and time that a student “touched” a lecture video. There was no way to know whether the student viewed the entire lecture online, downloaded it to be watched later, or ignored it entirely. This is a serious limitation in the data. We also collected class attendance by passing out an attendance sheet. Students were told that attendance had no effect on their grades, and that signing in was strictly to help us understand how attendance and video watching affected class performance. We are confident that the class attendance sheets were close to accurate.
We compiled the results at the end of the course. The plots below show the “video ratio” versus “class attendance ratio” for students based on their final grades. A ratio reflects the fraction of offered classes consumed via that medium. For example, a video ratio of 0.5 means that a student touched one half of all videos leading up to a particular exam. Similarly, a class attendance ratio of 0.7 means that a student attended 70 percent of all offered classes leading up to that exam. The five data points correspond to averages for students who finished the course with a particular grade.
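The bookkeeping behind these two ratios is simple division of per-student counts by the number of classes offered before an exam. A minimal sketch with illustrative numbers (the student labels and counts here are hypothetical, not the actual course data):

```python
# Hypothetical counts compiled from sign-in sheets and video-touch logs
# for the classes offered before a given exam.
classes_offered = 12

attended_count = {"student_a": 10, "student_b": 4}
touched_videos = {"student_a": 1, "student_b": 9}

def exposure_ratios(student):
    """Return (class attendance ratio, video ratio) for one student."""
    return (attended_count[student] / classes_offered,
            touched_videos[student] / classes_offered)

# student_a looks like the A-student pattern: high attendance, little video.
print(exposure_ratios("student_a"))
```

Averaging these per-student pairs within each final-grade group produces one (attendance ratio, video ratio) point per grade, which is what the plots show.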
The plot above is for the first exam. The x-axis is class attendance ratio; the y-axis is video-watching ratio. So, a student attending every class and watching every video would be at point (1,1). Data on the plot indicate average performance of students earning a certain grade at the end of the course.
Students who finished the course with an A attended more than 80 percent of the live lectures before the first exam on average, but viewed almost no video at all. B-students attended an even higher percentage of classes and viewed an average of one fourth of offered classes via video. C-, D-, and F-students relied more heavily on the video and attended fewer classes.
Data for the second exam show similar results. A-students attended less class (it was football season and the class met on Friday afternoons, which also affected the results), and still viewed very little video. B-students both attended class and viewed many videos. C-, D-, and F-students bet heavily on the video, but still did not attend class.
Anecdotal evidence suggests that A-students prefer to attend class, and that having once seen the material, they have little need to review it on video. B-students appear to be the workhorses—attending most classes and watching videos to increase their knowledge. Students performing poorly attended significantly less class and seemed more prone to rely on videos to recover. Students who failed didn’t do much of either.
My suspicion is that although watching a video could be more effective than live lecture (due to the ability to stop and rewind), the reality is that it is just too easy to be distracted when viewing video. I wonder how many “video students” tried to multi-task while viewing lectures, and how many walked away to let the dog out or get a drink from the fridge, confident that they “wouldn’t miss anything.”
My conjecture is that attending class has the significant advantage that it captures a student’s complete attention, and that concentration and focus are what is really needed to master difficult material, not the ability to rewind and hear it again.
Part of my recently completed NSF project on Grid-Based Material Handling was to develop a K-12 game to communicate the essence of high-density storage and computational complexity in a game format. The original plan was to build several physical board games (see prototype) and send them to colleagues I thought might be interested, but that was so 2007.
Shortly after I submitted the proposal, Apple announced the iPad and, of course, “that changed everything.” Building an interactive game on a tablet device was clearly the way to go, so we did indeed change everything. An electronic game is infinitely scalable; my board game was not.
Box Rush is an interactive game built (ironically) on the Android platform. You can download it here. We hope to release a version for iOS in the coming months.
Rather than describe the design, I’ll let you download the game to see how intuitive it is. Users can log in with their Facebook accounts, and the system maintains global best solutions, so users can see how their solutions compare with those of other users around the world. Many of the levels are easy, but many are not. We have not computed optimal solutions, so crowdsourcing will have to suffice! I am hopeful that we will be able to develop extensions to this game in the coming year.
I’d like to thank Joseph Shanahan, an undergraduate wireless engineering major here at Auburn. Joseph did an outstanding job on the design and execution of the game. If you are interested in having him develop for you, please see his site LearnJavaFast.com.
Many thanks to the National Science Foundation, which sponsored the research and this outreach effort.