Archive for May, 2011

Tactile 3-D Experiences…

Posted: May 26, 2011 in Uncategorized

I stumbled across …interesting… news yesterday.  Let me share it with you:
http://www.khou.com/news/texas-news/Cat-fight-Student-suspended-for-hanging-stolen-lab-specimen-122595914.html

The gist of it is this: Students thought it would be funny to take the cat they dissected for class and hang it on cars as a prank.  One kid did it to a friend, who retaliated and hung it back on the first kid’s car.

First, just let me say, how funny.  I haven’t heard anything this hilarious since I read American Psycho.  I just love seeing dead animals hanging around.

Second, the school district and its spokesman really need to rethink their position.  The spokesman said, and I quote, “The district is currently looking at software that will mimic a dissection.  The only problem is you don’t get that tactile 3-D experience.”  Apparently, the “tactile 3-D experience” is what you get when you hang a cat from a car.

Seriously, who thinks these kids aren’t getting the “benefit” of a real dissection?  So far, what they’ve shown is that they have no respect for the dead and that they have a really poor sense of humor.  I think it may be time to take away that “benefit”…

Unit 7 Research Journal

Posted: May 25, 2011 in Uncategorized

Okay, I haven’t been posting my journals lately, which I’m sure you’ve all been missing, but here’s a new one to keep you happy.  🙂

For this one, I need to search the Internet using the phrase “data collection techniques” and find websites that discuss different types of information gathering: views from people, direct observation of behavior or testing, and physical records.

So here are the sources I found:

First up, MBA-Lectures.com.  According to them, the observation method “gives the opportunity to record…behavior directly.”  They do point out that observation might be more time-consuming and costly than other methods, plus it is often subjective because the observer can include their own biases in the results.

Next, the OCHA Disaster Response Preparedness Toolkit.  Strangely appropriate since it’s Zombie Awareness Month.  They also discuss observation, but for them, it’s observation of a situation.  They do, however, have a lot of information about interviewing to gather information.  Because they are looking to gather information from a number of people about their needs, this is actually helpful: I would like to be able to interview faculty to get their views.  In what I take to be a good way to get qualitative data, they suggest beginning an interview with general conversation about “life in the area, and about their experiences during the disaster.”  For me, this would be replaced by general conversation about teaching, grading, and assessing students, and then moving on to the actual assignment they gave their students and how they graded and assessed it.

They also look at physical records.  For their purposes, they look at calendars.  These can help them know what was going on during a given time period, which is helpful since they are looking to get life back to normal; knowing what was “normal” for that date gives them a goal to work towards.  Obviously, this isn’t directly related to what I’ll be looking at, but I wanted to mention it because it tells me that I should consider what kinds of records I want to look at.  Should I find a way to look at past records of grades?  Perhaps I should include a question in the qualitative section of the study that looks at whether faculty members who follow a rubric feel that it is fair, and whether they graded their students based on what they felt was right as opposed to what the rubric said?  (I say this because there have been times that I felt a student got too high or low a grade based on a rubric…)

Next up: the University of Florida Cooperative Extension Service.  This is a bit old – 1992! – but what I found from them is actually quite helpful.  It’s about “selecting a data collection technique,” really about survey instruments, and it discusses reliability and validity.  I suppose I didn’t have to bring it up here at all, but I wanted to keep it for future reference.

Northern Arizona University also had a great website up.  For “Methods of Data Collection,” they include a list of methods and examples: documents, observations, surveys, experiments, other field methods (nominal group technique and Delphi), and multi-method approaches.  It let me know that I’m definitely interested in doing a mixed-methods study: quantitative, gathering information from surveys about a grading/assessment experiment, and qualitative, doing interviews of faculty members to understand the whys as well as the whats.

Finally, another surprising source: the Center for Rural Studies.  Their website is all about gathering information about an area and a group of people.  Again, lots of discussion about surveying, observing, and gathering data from collected sources (such as census data, vital statistics, etc.).

So, found me some sources!  What I’m supposed to do now is:

Critically analyze the methods you are considering for your draft proposal and develop the specific details of these methods.

Well, right now, I am going to ask faculty members to use a particular assignment in order to measure course outcomes (that are already in place).  I will create a rubric to assess the assignment based on those course outcomes using Bloom’s Taxonomy.  I will give the faculty members training on how to apply the rubric.  The faculty members will then grade the assignment as they normally would, using letter grades.  Once they have finished with the traditional grading, they will then assess the assignment according to the rubric I have given them.

Then I will gather data, finding out what letter grades and what assessment numbers were assigned.  This data can be compared, looking to see whether the letter grades reflect the course outcomes that were achieved (or not achieved).
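Just to give myself a concrete picture of that comparison, here is a minimal sketch in Python.  To be clear, everything in it is an assumption on my part for illustration: the made-up numbers, the letter-to-points mapping, and the choice of SciPy’s Spearman correlation as the statistic.

    # Hypothetical sketch: do traditional letter grades track rubric-based
    # outcome scores?  All data and choices here are placeholders.
    from scipy.stats import spearmanr

    # Made-up results for one assignment across six students
    letter_grades = ["A", "B", "A", "C", "B", "D"]   # traditional grading
    rubric_scores = [3.5, 2.0, 2.5, 2.0, 3.0, 1.5]   # outcome assessment (0-4 scale)

    # Put the letter grades on a numeric scale so the two sets can be compared
    grade_points = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
    numeric_grades = [grade_points[g] for g in letter_grades]

    # Spearman's rho asks: do high grades tend to go with high outcome scores?
    rho, p_value = spearmanr(numeric_grades, rubric_scores)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

I leaned toward a rank-based statistic in this sketch because letter grades are ordinal rather than truly numeric, but that choice (and whether this is the right statistic at all) is something I would still need to justify in the actual proposal.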

After analyzing the data, I can meet with the faculty members and interview them, asking for information about how they grade and what they think about the grades they assigned versus the outcomes they assessed.

At this point, I’m not sure what I can do to make this better.  I’m open to suggestions, but I like to think I’m on track with it.

References:

http://mba-lectures.com/advance-research-methods/1377/primary-data-collection-methods.html

http://ocha.unog.ch/drptoolkit/PNeedsAssessmentDataCollection.html

http://edis.ifas.ufl.edu/pdffiles/PD/PD01600.pdf

http://www.prm.nau.edu/prm447/methods_of_data_collection_lesson.htm

http://crs.uvm.edu/community_data/data.html

I am a procrastinator.

Posted: May 20, 2011 in Uncategorized

Okay, so maybe I should have more to say beyond that.  Perhaps I should say that I will stop being a procrastinator…some day.  Or maybe I should…nah, I’ll just finish this later.

So, this unit, I need to look at research questions and methods of research.  Fun!

These are my current research questions (subject to change many times over):

  1. Will there be a relationship between assigned grades and course outcome assessments?
  2. What, other than achievement of course outcomes, do instructors use when assigning grades?
  3. Will the use of a course outcome assessment rubric change the grades that teachers assign to student projects?

Next up – I will explain why it is appropriate to my specialization, how it will address the problem I stated, and the variables and information I will gather to answer it.  I will also explain and justify my methods of data collection.

First, you need to know my problem statement:

The problem in question is that the subjectivity of grades means they do not accurately measure whether students have reached the course outcomes.  Course outcomes must therefore be assessed separately to ensure that they are being reached.

Now, after doing some more reading, I think I would like to change my research questions to

  1. What is the relationship between assigned grades and course outcome assessments?
  2. (I’ll keep number 2): What, other than achievement of course outcomes, do instructors use when assigning grades?
  3. Do instructors, when shown the difference between course outcome assessments and grades, choose to change grades and/or their method of grading?  If so, why and how?

And, of course, the questions may still change even more, but for now, I think they are more focused.

So – how is it appropriate to my specialization?  Well, my specialization is adult and post-secondary education.  With SACS (the regional accrediting association here) requiring that colleges find a way to measure whether or not students are learning what they are supposed to be learning, it’s appropriate for me to look at what is being measured and how.  Many colleges do not have a separate assessment for course outcomes, so if teachers are not measuring course outcomes with their grades, then how is it possible for students to know if they’ve achieved them?  And how is it possible for the school to know if a student has achieved them?

On to variables!  This is the hard part for me.  I always get them confused.  Independent variables are those that cause, influence, or affect the outcome; they are the antecedent.  Dependent variables are those that depend on the independent variables; they are the outcome.  Maybe this means that my independent variables would be the assignment that the instructor would use to measure the course outcome and the rubric that is used to assess that course outcome?  But I’m not sure, and I’m not sure what the dependent variable would be.  Help, anyone?

Finally, information I’ll gather.  This is, I hope, an easier one.

For the first question – what is the relationship between assigned grades and course outcome assessments? – I’ll be asking instructors to grade an assignment both on their own and using an assigned rubric that will measure the course outcomes.  I will record both sets of numbers and then determine if there is a relationship between them.  Do high grades equal high achievement of course outcomes?  (I’m thinking that isn’t necessarily so…)

For the second question – What, other than achievement of course outcomes, do instructors use when assigning grades? – I’ll be asking the instructors to discuss how they grade students.  If they’ve used a rubric of their own, I will want to know that and see it.  If other influences come into play, such as the one from the instructor who once told me I should give an “A for effort,” then I will hope that they are honest and tell me that, too.

For the third and final question – do instructors, when shown the difference between course outcome assessments and grades, choose to change grades and/or their method of grading?  If so, why and how? – I will present the results of the grades to the instructors, letting them know the differences that exist between measuring course outcome and grading an assignment.  Then I will ask them – again, assuming honesty – whether or not they will change the way they grade, and if so, how they will make that change.

Of course, at this point, this is all theoretical, and I haven’t even gotten feedback on the project I turned in last week, so we’ll see what my instructor has to say…