Stress Analysis

Section A: Linear Regression

Stress and symptoms

Wagner et al. (1988) hypothesized that individuals who experience more stress, as assessed by a measure of daily hassles, will exhibit higher levels of symptoms than those with lower levels of stress. The following data set has two variables: stress (daily hassles) and symptoms. Following on from Wagner's prediction, we are interested in assessing the impact of stress on symptoms.

Question 1

Provide a non-directional alternative hypothesis for these data

Stress (daily hassles) is related to the level of symptoms; the direction of the relationship is not specified.

Provide the null hypothesis for these data

There is no relationship between stress and symptoms.

What sort of data are both the Stress and Symptoms?

Both stress and symptoms are scale (continuous) data.

Question 2

Correlation:

Correlations

                                   Hassles    Symptoms    Support
Hassles     Pearson Correlation    1          .613**      -.112
            Sig. (2-tailed)                   .000        .369
            N                      66         66          66
Symptoms    Pearson Correlation    .613**     1           -.130
            Sig. (2-tailed)        .000                   .297
            N                      66         66          66
Support     Pearson Correlation    -.112      -.130       1
            Sig. (2-tailed)        .369       .297
            N                      66         66          66

**. Correlation is significant at the 0.01 level (2-tailed).

Summarise the correlation using APA format [r(64) = .613, p < .001]

The Pearson r of .613 means that changes in one variable are strongly associated with changes in the other: as stress (daily hassles) increases, symptoms also tend to increase, and as stress decreases, symptoms tend to decrease. This is a positive correlation. Support, in contrast, is not significantly related to either hassles or symptoms.
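As an aside, the same Pearson correlation can be reproduced outside SPSS. The sketch below uses Python with scipy; the variable names and values are hypothetical stand-ins, not the actual assignment data file.

import pandas as pd
from scipy import stats

# Hypothetical stand-in scores; substitute the actual hassles/symptoms data
hassles = pd.Series([120, 85, 200, 150, 95, 175])
symptoms = pd.Series([88, 72, 110, 95, 78, 102])

r, p = stats.pearsonr(hassles, symptoms)
print(f"r = {r:.3f}, p = {p:.3f}")   # SPSS reports the same r as "Pearson Correlation"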

Question 3

Run a Linear Regression. Paste all your output here:

Model Summary

Model    R       R Square    Adjusted R Square    Std. Error of the Estimate
1        .616a   .379        .359                 16.225

a. Predictors: (Constant), Support, Hassles

ANOVAb

Model             Sum of Squares    df    Mean Square    F         Sig.
1   Regression    10129.229         2     5064.614       19.238    .000a
    Residual      16585.802         63    263.267
    Total         26715.030         65

a. Predictors: (Constant), Support, Hassles
b. Dependent Variable: Symptoms

Coefficientsa

                  Unstandardized Coefficients    Standardized Coefficients
Model             B          Std. Error          Beta                         t        Sig.
1   (Constant)    77.094     7.768                                            9.925    .000
    Hassles       .101       .017                .606                         6.062    .000
    Support       -.148      .238                -.062                        -.623    .536

a. Dependent Variable: Symptoms
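For reference, a comparable two-predictor regression can be fitted outside SPSS. The sketch below uses Python with statsmodels; the column names and data values are hypothetical placeholders, not the actual hassles/support/symptoms file.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data; replace with the actual assignment data set
df = pd.DataFrame({
    "hassles":  [120, 85, 200, 150, 95, 175, 60, 210],
    "support":  [20, 35, 15, 25, 30, 18, 40, 12],
    "symptoms": [88, 72, 110, 95, 78, 102, 65, 115],
})

# Ordinary least squares with two predictors, mirroring the SPSS model above
model = smf.ols("symptoms ~ hassles + support", data=df).fit()
print(model.summary())   # reports R Square, F, and the unstandardized B coefficients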

Summarise the regression analysis using APA format [R² = .379; F(2, 63) = 19.238, p < .001. Hassles was a significant predictor (β = .606, p < .001), whereas Support was not (β = -.062, p = .536)]

The regression equation is:

Ŷ = 77.094 + .101(Hassles) - .148(Support)
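To illustrate how this equation is applied, the short Python sketch below plugs in scores for one hypothetical respondent; the respondent's values are invented for illustration only, and the coefficients come from the Coefficients table above.

def predict_symptoms(hassles: float, support: float) -> float:
    """Predicted symptoms = 77.094 + .101*Hassles - .148*Support (B values above)."""
    return 77.094 + 0.101 * hassles - 0.148 * support

# e.g. a hypothetical respondent with 150 daily hassles and a support score of 20
print(predict_symptoms(150, 20))   # about 89.3 predicted symptoms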

Casual observation would suggest that there are significant differences between the four sample means, with samples 1 and 2 having lower values than samples 3 and 4. Each sample has the same variance, so the average variance within a sample is 5.5. However, the variance of the whole data set (s² = 17.79) is much higher, because of the difference in the average size of the observations in some samples compared with others. This is the comparison ANOVA makes across several sample means. The statistical test used in ANOVA is the F test, applied as a one-tailed test. The test is analogous to the z test or t test used to compare the difference between just two means, and in fact an ANOVA carried out on two samples will indicate the same significance (Howell, 2006, p. 11).

With the two-sample tests, a measure of the difference between the two means is divided by an appropriate measure of variation to obtain the test statistic (z or t). With ANOVA, the appropriate measure of difference between several means is not a simple difference but the sum of squares of the differences between the individual sample means and the overall mean.

This sum of squares (SStreatment) is corrected for the number of means by dividing by k - 1, where k is the number of groups, to give a mean square, or variance (MStreatment). If there were no difference between the several sample means, this MStreatment would ...
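A minimal sketch of this logic in Python is given below, using four small hypothetical samples (not the data referred to above); it computes SStreatment, MStreatment, and the F ratio by hand and checks the result against scipy's one-way ANOVA.

import numpy as np
from scipy import stats

# Four hypothetical samples with similar within-group spread but different means
groups = [
    np.array([3, 5, 4, 6, 2]),
    np.array([4, 6, 5, 3, 7]),
    np.array([9, 11, 10, 12, 8]),
    np.array([10, 12, 11, 9, 13]),
]

grand_mean = np.concatenate(groups).mean()
k = len(groups)
n_total = sum(len(g) for g in groups)

# Between-groups (treatment) sum of squares and mean square
ss_treat = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_treat = ss_treat / (k - 1)

# Within-groups (error) sum of squares and mean square
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_error = ss_error / (n_total - k)

print("F (by hand) =", ms_treat / ms_error)
print("F (scipy)   =", stats.f_oneway(*groups).statistic)   # the two values agree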