The Biennial Conference for Chemical Education - An Overview

Posted on Sunday, September 2nd, 2018

Written by Mike Moore, PhD Student, Department of Chemistry

The following post was written by Mike Moore, who recently received a PSEER Travel Grant to attend the Biennial Conference for Chemical Education (BCCE). 


Hi all!  I’m Mike Moore, a PhD student working with Dan Thomas (Department of Chemistry at the University of Guelph) and studying chemical education. Thanks to the PSEER travel grant, I was able to attend the Biennial Conference for Chemical Education (BCCE). I went in with a few goals and sub-goals:

  1. Learn about topics broader than my research to help me better understand the general state of the field 
    1. Be able to describe several common theoretical frameworks tangential to my work
    2. Be able to explain the pros and cons of the statistical methods I use compared with ones that are more peripheral to my work
  2. Network with people in similar and different areas of chemical education research (CER)
    1. Eat lunch with different people each day
    2. Chat with people while waiting for sessions to start (get the elevator pitch down!)
  3. Find out more about how standardized tests are developed and evaluated
    1. Attend the workshop on ACS test development
    2. Talk to some people who design ACS tests about their methodology

Based on these metrics, and some of the goals that emerged throughout the conference, I consider it a success!  Below I outline some of the professional and personal highlights of the conference.

The first evening featured a keynote about the history of the BCCE and the field of chemical education in general. It took place in one of the most interesting keynote settings I’ve ever seen: a hockey rink (see Figure 1). I had a couple of interesting takeaways from the keynote: first, that the subfields in chemistry (organic, inorganic, physical, analytical) used to be even more siloed than they seem to be today; of course, even then there was commentary that this had to stop. My second takeaway was that awareness of gender bias in educational (and chemical education) research is not new. Gender discrimination and bias were already hot topics at the first few BCCEs (beginning in 1970) and were the subject of editorial columns before that. It can be easy to forget how far back certain ideas and movements go (Figure 2!).

Figure 1: Location of the first keynote - a hockey rink.

I attended many technical sessions; too many to summarize here. Nonetheless, I’ll mention some highlights:

  • In solving complex problems, changing strategies was almost always preceded by reflection. Some of these strategy changes were useful, others detrimental. (J.G. Rodriguez et al.)
  • An index to score the quality of multiple-choice questions was developed based on the breadth of literature about common errors (J.B. Breakall et al.). I wonder if the score on this index would correlate with students’ perceptions of exam fairness?
  • The Learning About Theoretical Frameworks in Chemistry Education Research session was also particularly interesting. Hosted by some of the heavy hitters in the field (M. Orgill and G.M. Bodner), it illustrated how a theoretical framework shapes one’s research questions and methods of data collection.

These technical sessions allowed me to give an unreserved checkmark to goal 1: learning about broader topics.

Figure 2: A newspaper clipping from August 14, 1912, discussing climate change; a reminder that some ideas that seem modern have old roots.

I attended two workshops (in-depth sessions lasting 3 hours): An ACS Exams Committee Experience: Developing a Test Specification and Writing and Editing Items, as well as Scale Interventions: Their Adaptation into a Class and Measuring Their Effect. Both workshops were largely the work of Prof. Kristen Murphy: she facilitated the former, and one of her master's students facilitated the latter.

There were two highlights of the ACS Exams workshop. The first was how much thought goes into weighting each area of content knowledge on the exams. This was the very first step of the ACS exam design process and was handled with far more rigour than I’d seen before. The second highlight was the method used to determine the complexity of each question. I’ve recreated it roughly below, with a small code sketch after the list:

  1. Identify each different step required in solving the problem (e.g. convert to moles, use stoichiometric ratio, convert to mass).
  2. Use the following scoring system to assign a difficulty rating to each component:
    1. For the first easy component, +1
    2. For the first medium-difficulty component, +2
    3. For the first hard component, +4
    4. For each additional component, regardless of difficulty, +1
    5. Interactivity rating: assign a difficulty to putting the steps together (+1 to +3)
      (and, for multiple-choice questions only:)
    6. Do the distractors require evaluation between them (+2) or do they require elimination (+1)?
  3. Add all the values from step 2 together to obtain the overall difficulty of the question.
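To make the scheme concrete, here is a minimal sketch in Python of how I read those rules. This is my own reconstruction rather than the ACS Exams procedure itself; the function name, the labels, and the assumption that the first component at each difficulty level gets its full weight (with every additional component adding +1) are all mine.

```python
# Rough reconstruction of the complexity-scoring scheme described above.
# All names and structure are my own assumptions, not official ACS Exams code.

def question_difficulty(components, interactivity, distractor_rating=0):
    """Estimate a question's difficulty from rated solution steps.

    components: list of "easy" / "medium" / "hard" labels, one per step
                (e.g. convert to moles, use stoichiometric ratio, convert to mass).
    interactivity: +1 to +3 rating for how hard the steps are to put together.
    distractor_rating: 0 (free response), +1 (elimination), or +2 (evaluation),
                       for multiple-choice questions only.
    """
    first_points = {"easy": 1, "medium": 2, "hard": 4}
    seen_levels = set()
    score = 0
    for label in components:
        if label not in seen_levels:
            # First component at each difficulty level gets its full weight (2.1-2.3)...
            score += first_points[label]
            seen_levels.add(label)
        else:
            # ...and each additional component adds +1 regardless of difficulty (2.4).
            score += 1
    score += interactivity       # 2.5: difficulty of putting the steps together
    score += distractor_rating   # 2.6: multiple-choice distractor rating
    return score


# Example: three steps (easy, medium, easy), moderate interactivity, and
# distractors that only require elimination -> 1 + 2 + 1 + 2 + 1 = 7.
print(question_difficulty(["easy", "medium", "easy"], interactivity=2, distractor_rating=1))
```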

The most interesting steps in that process, in my opinion, were 2.5 and 2.6.  In my experience, when we evaluate the problems we ask students, we often use the number of steps as a proxy for difficulty instead of thinking about all aspects of the question.

The Scale Interventions workshop was also fascinating. I’d been to an earlier iteration of this workshop two years ago at the last BCCE, so while the introduction wasn’t too exciting for me, the rigour they used to evaluate their concept inventories was fantastic. The general flow of test evaluation went as follows (a rough sketch of how one might track the interview findings appears after the list):

  • Design test question through normal means
  • Have students write the test, then interview to find:
    • If students could get the correct answer using incorrect reasoning
    • If students could feasibly get an incorrect answer using correct reasoning
    • If students could get the correct answer based on non-content knowledge (e.g. a suspicious pattern in the answer choices)
  • Revise the questions, repeat
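As a rough illustration of that loop, here is one way to record the interview findings for a draft question and decide whether it needs another revision cycle. The class, field names, and revision rule are my own assumptions for illustration, not the workshop's materials.

```python
# Hypothetical bookkeeping for the interview-based evaluation loop described above.
# Field names and the revision rule are assumptions, not the workshop's protocol.

from dataclasses import dataclass

@dataclass
class InterviewFindings:
    correct_answer_wrong_reasoning: int = 0   # right answer reached through faulty reasoning
    wrong_answer_correct_reasoning: int = 0   # sound reasoning led to a listed distractor
    non_content_cues_used: int = 0            # e.g. a suspicious pattern in the answer choices

    def needs_revision(self) -> bool:
        # Any occurrence of these failure modes sends the question back for revision.
        return (self.correct_answer_wrong_reasoning
                + self.wrong_answer_correct_reasoning
                + self.non_content_cues_used) > 0


# Example: two interviewees got the right answer with incorrect reasoning,
# so the question goes through another revise-and-interview cycle.
findings = InterviewFindings(correct_answer_wrong_reasoning=2)
print(findings.needs_revision())  # True
```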

The systematic rigour that this group put their tests through was fantastic. I find that this kind of testing is often considered (though usually only in part) but seldom executed. Goal 3, check!

Overall, I enjoyed the conference very much and felt that I got a lot out of it, both academically and socially. It was great to talk to people in many different fields with hugely varying experiences: new and veteran chemical education researchers, high school teachers doing research on the side, ACS education specialists, and more. I learned about their experiences and backgrounds by, you guessed it, chatting before sessions and eating lunch with new people each day. Goal 2, check!
