GROUP A


Marisa Mathews-Michaels
Nicolette Taylor

Shawnette Williams

 * **Abstracts** - An abstract is sometimes used to give a brief overview of the intended content of a written discourse. An abstract may address the purpose and intended outcomes of the body of work. An abstract may be written by someone promoting, examining, or studying someone else's work. The author of the abstract tries to capture the essence of the work. This is an important part of academic work and research and may be used to give credence to the work. The teacher may guide students into learning how to gather important information about a work from its abstract. The abstract could be used as a brainstorming activity to predict outcomes.

 * A **Bell-shaped distribution** is a "symmetrical curve representing the normal distribution" (615 Glossary). On a 0-100 scale centered at 50, most of the scores will fall near the middle, with fewer and fewer scores toward either tail of the curve. If a set of scores forms a bell curve, it becomes easier to look at standard deviations ("how scores vary around the mean" - the mean sits at the top of the bell curve, here 50) (Bracey, 2006, p. 47). It is nice, statistically, to work with a bell-shaped distribution, but it does not always match up with grading, because it means that half of your students would earn a 50 or below (some college classes actually curve their grades based on this, meaning there are very few A's). For data like this, we hope to have a negatively skewed distribution (one that has more scores in the 70-and-above range). However, plotting student scores on a curve is extremely useful for understanding where students are and how to push them higher in the curve.

 * **Central tendency** is the average (mean, median, or mode) value of any distribution of data. "Although frequency distributions show the patterns in scores, it is useful to summarize the performance of a group using a single score for the typical or average performance of a group. These are what researchers call measures of central tendency, and the three most common are the mode, mean, and median" (Lodico, Spaulding, & Voegtle, 2010). The mode is the score in a distribution that occurs most often. The mean is the mathematical average of a set of scores. The median is the score that divides a distribution exactly in half when scores are arranged from highest to lowest; it is the middle of the distribution.

 * **Correlation coefficient** - The correlation coefficient describes how one variable varies as another variable also varies (Bracey, 2006, p. 74). For this reason, correlation does not equal causation. Correlation coefficients can range from -1.0 to +1.0. Because coefficients are the basis of most research, they are important for correctly interpreting educational research as well. Bracey's example (p. 74) is a good demonstration of this: "If children's test scores correlate positively with parents' educational level, as the educational level goes up, the test scores go up."

 * **Criterion-referenced test** "refers to a test that measures whether a student has reached a pre-established passing level, often called a cut score" (Boudett, City, & Murnane, 2008, p. 39). "Criterion-referenced testing, unlike norm-referenced testing, uses an objective standard or achievement level. An evaluee is required to demonstrate ability at a particular level by performing tasks at that degree of difficulty. Scores on criterion-referenced tests indicate what individuals //can// do — not how they have scored in relation to the scores of particular groups of persons, as in norm-referenced tests. Criterion-referenced testing avoids all this confusion. Concrete criteria are established and the individual is challenged to meet them" (Valpar International Corporation, 2011).

 * **Dependent variable** - A factor or phenomenon that is changed by the effect of an associated factor or phenomenon called an independent variable. The dependent variable is something that depends on other factors. For example, a test score could be a dependent variable because it could be affected by attendance or time spent studying. The independent variable's association with the dependent variable may change under many conditions.
 * **Experimental research** - According to Ross and Morrison (1997), experimental research in educational psychology began around 1900. "The experimenter's interest in the effect of environmental change...demanded designs using standardized procedures to hold all conditions constant except the independent (experimental) variable" (p. 1021). Because all variables besides the independent variable should be controlled, most educational research is termed "quasi-experimental research": it is difficult to control all variables, especially in a classroom. Teachers should remember that even correlations shown in studies may not apply to their classroom due to extenuating variables that were not present in the original study.

 * **Formative research methods** - Pre- or ongoing research processes implemented to determine whether there is improvement in the dependent variables as the independent variables change. Reigeluth and Frick (1996), in a formative research study, examine how to improve design theories in education. They examine generalized theories through three criteria: "effectiveness, efficiency and appeal." They posit that one should instead consider conducting formative research according to the environment - "situationality" - and according to whether research used in one design may be used in another design - "replication" (Reigeluth & Frick, 1996). A formative research method may be utilized to assess how effectively students use the target language to discuss issues affecting young people by comparing their way of life and the lifestyles of youth in Spanish-speaking or French-speaking countries.
 * **Independent variable** - A factor or phenomenon that causes or influences another associated factor or phenomenon called a dependent variable. The independent variable stands alone and is not changed by the other variables being measured. For example, a student's grade level might be an independent variable: other factors, such as attendance or test scores, will not change a student's grade level. In a research study, the independent variable defines the primary focus of research interest. The independent variable's association with the dependent variable may change under many conditions.

 * **Norm-referenced tests** "compare an individual's performance to the performances of a group, called the 'norm group.' For the results to be meaningful, it is necessary to know the specific composition of the norm group. For example, if an individual scores at the 87th percentile on a math test, what can you say about his or her math abilities? Nothing, until you know something about the norm group. One conclusion will be reached if the norm group is a collection of fourth graders. An entirely different conclusion will be reached if the norm group is a collection of university seniors majoring in physics" (Valpar International Corporation, 2011). Norm-referenced tests "are tests that are designed to describe performance - often the performance of individual students, but in some cases that of schools, districts, states, or even countries - in terms of a distribution of performance" (Boudett, City, & Murnane, 2008, p. 37).
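The dependence of a norm-referenced result on the norm group can be shown with a short sketch; the scores and the `percentile_rank` helper below are invented for illustration.

```python
# Hypothetical norm group: 20 math scores from some reference population.
norm_group = [48, 52, 55, 58, 60, 61, 63, 65, 66, 68,
              70, 71, 73, 75, 77, 79, 82, 85, 90, 95]

def percentile_rank(score, group):
    """Percent of the norm group scoring strictly below the given score."""
    below = sum(1 for s in group if s < score)
    return 100 * below / len(group)

print(percentile_rank(80, norm_group))  # 80.0: higher than 16 of the 20 scores
```

Running the same function against a different norm group (say, scores from physics majors) would yield a different percentile for the identical raw score, which is exactly the point made by the Valpar passage above.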
 * **Raw score** - The raw score is a numerical or letter grade achieved on a test. It is evaluated to determine the proficiency level of the individual on a given assignment. The raw score is relevant to education because it helps to define how much knowledge a student has acquired about a topic or concept. The raw score is compared against the mean, median, and mode when doing statistics for analyzing grades in relation to performance levels. The teacher may use raw scores to assess the strengths and weaknesses of the students; in doing so, the teacher will know how to modify the lesson to make it more creative, challenging, or manageable for the students. The teacher may also use raw scores to implement strategies to improve the students' language learning skills - reading, listening, speaking, writing, and interpreting.

 * **Reliability** refers to the "consistency and validity of test results determined through statistical methods after repeated trials" (615 Glossary). You aim to have a reliable measure when testing, one that will give you close to the same answer every time it is used (Boudett, City, & Murnane, 2008, p. 33). There are three types of reliability evidence: stability/test-retest (consistency when administering the same test more than once), alternate form (consistent results between two or more forms of a test), and internal consistency ("consistency in the way an assessment instrument's items function") (Popham, 2011, p. 62). In general, "the greater the measurement error, the lower the reliability" (Boudett, City, & Murnane, 2008, p. 33). This is important for all teachers to be aware of because it means that you cannot necessarily judge a student's abilities from one test. It is also important to think about reliability of assessment in terms of more than just paper-and-pencil tests. Draves (2009) tested the reliability of portfolio assessments with pre-service music teachers and found that they can be reliable measures of a student's ability, but that expectations must be understood by all parties and self-assessment of student work should be emphasized. It may be helpful to combine traditional assessments with portfolio/performance assessments to have the most reliable data possible.

 * **Standard deviation** "provides information about how much scores vary around the mean" (Bracey, 2006, p. 47). A small standard deviation indicates that the scores are tightly bunched close together, while a large standard deviation indicates that they are widely spread around the mean. "The standard deviation is a measure of how spread out your data is. Computation of the standard deviation is a bit tedious" (The Children's Mercy Hospital, 2002).

 * **Validity** is the "degree to which an instrument, selection process, statistical technique, or test measures what it is supposed to measure" (615 Glossary). Popham (2011) states that "[t]ests, themselves, do not possess validity. Educational tests are used so educators can make inferences about a student's status" (p. 85). Validity of educational assessments can be based on content (how well the assessment represents the curricular aim it is trying to measure), criterion (how well the assessment predicts how a student will do on an external criterion), or construct (whether empirical evidence can confirm that a construct exists and that the assessment measures it correctly) (p. 89). Teachers should create assessments that correlate with their content standards and learning objectives. If a teacher gives assessments that are not relevant to what he/she previously taught or are not in line with students' learning objectives, the assessments cannot be said to be valid. The results are then essentially void, because the teacher will be unable to make accurate future instructional decisions about his/her students.
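Several of the statistics defined in this glossary (mean, median, mode, and standard deviation) can be computed directly with Python's standard library; the class scores below are invented for illustration.

```python
import statistics

# Hypothetical set of test scores for one class.
scores = [62, 70, 70, 75, 78, 80, 85, 88, 92]

print(statistics.mean(scores))    # mean: the mathematical average of the scores
print(statistics.median(scores))  # median: the middle score -> 78
print(statistics.mode(scores))    # mode: the most frequent score -> 70
print(statistics.pstdev(scores))  # population standard deviation: spread around the mean
```

For a roughly bell-shaped set of scores, the mean, median, and mode land close together near the top of the curve, and the standard deviation describes how widely the rest of the scores spread around that center.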
 * **Andragogy** is "the art and science of helping adults learn…" (Henschke, 2011, p. 34). Helping adults learn can apply to more than college-level classes. Teachers and administrators need to learn constantly from their past students, current students, and from students at other schools. Learning is always an ongoing process, especially for those in the education profession.

References

Boudett, K.P., City, E.A., & Murnane, R.J. (2008). //Data wise: A step-by-step guide to using assessment results to improve teaching and learning.// Cambridge, MA: Harvard Education Press.

Bracey, G. (2006). //Reading educational research: How to avoid getting statistically snookered.// Portsmouth, NH: Heinemann.

Draves, T.J. (2009). Portfolio assessment in student teaching: A reliability study. //Journal of Music Teacher Education, 19//(1), 25-38. doi: 10.1177/1057083709343906

Henschke, J.A. (2011). Considerations regarding the future of andragogy. //Adult Learning, 22//(1), 34-37.

Lodico, M.G., Spaulding, D.T., & Voegtle, K.H. (2010). //Methods in educational research: From theory to practice.// San Francisco, CA: Jossey-Bass. Retrieved from [].

Popham, W.J. (2011). //Classroom assessment: What teachers need to know.// Boston, MA: Pearson Education, Inc.

Reigeluth, C. & Frick, T. (1996). A methodology for creating and improving design theories. Retrieved from [].

Ross, S.M. & Morrison, G. (1997). Experimental research methods. //Getting started in instructional technology research//, 1021-1043. Retrieved from http://www.aect.org/edtech/ed1/38.pdf

The Children's Mercy Hospital (2002). //What is a standard deviation?// Retrieved from [].

Valpar International Corporation (2011). //What is criterion-referenced testing?// Retrieved from [].