Average hourly earnings in the U.S. retail trade industry (in current dollars and constant dollars) are shown in the table.
| Year | 1990 | 1995 | 2000 | 2002 | 2003 |
| Current dollars | 4.88 | 5.94 | 6.75 | 7.13 | 7.29 |
| Constant dollars | 5.70 | 5.39 | 5.07 | 5.00 | 4.97 |
a. Define the terms current dollars and constant dollars. (You will have to look these terms up.)
b. Find the least squares regression lines that approximate the average hourly earnings in current dollars and in constant dollars for this industry. Find the correlation coefficient in both cases. Comment on the meanings of the correlation coefficients.
c. Find where the two regression lines that you obtained in part b intersect. What does this point mean?
d. What are the slopes of the regression lines? What do they mean? (Use correct units.) What does this say about the long-term prospects of retail trade industry employees?
e. Use the regression lines to estimate the difference in current dollar and constant dollar hourly earnings in the year 2005.
f. Graph both regression lines on the same set of axes (in Excel).
g. If you were a union negotiator for employees in the retail trade industry, how would you use this information?
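The question asks for Excel, but here is a minimal R sketch of parts b, c, and e for reference (the data are typed in from the table; the variable names are my own):

year     <- c(1990, 1995, 2000, 2002, 2003)
current  <- c(4.88, 5.94, 6.75, 7.13, 7.29)
constant <- c(5.70, 5.39, 5.07, 5.00, 4.97)

# Part b: least squares lines and correlation coefficients.
fit.cur <- lm(current ~ year)    # current dollars vs. year
fit.con <- lm(constant ~ year)   # constant dollars vs. year
cor(year, current)    # near +1: strong upward linear trend
cor(year, constant)   # near -1: strong downward linear trend

# Part c: intersection, solving b1 + b2*x = a1 + a2*x for x.
b <- coef(fit.cur); a <- coef(fit.con)
x.int <- (a[1] - b[1]) / (b[2] - a[2])
y.int <- b[1] + b[2] * x.int
c(x.int, y.int)       # year and hourly wage at which the lines cross

# Part e: predicted current-minus-constant difference in 2005.
predict(fit.cur, data.frame(year = 2005)) -
    predict(fit.con, data.frame(year = 2005))

In Excel, the same results come from the SLOPE, INTERCEPT, CORREL, and TREND worksheet functions, with the graph drawn as a scatter chart carrying two linear trendlines.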
In: Statistics and Probability
Case Study
When Jack Welch assumed the top position at General Electric in 1981, he inherited a company that had a market value of $12 billion, certainly a modest number by today's standards. By the time he left in 1998, GE was worth $280 billion. While leading GE, Welch was charged with the task of making the conglomerate better by any means necessary. With his gut telling him that his company was due for a complete overhaul, Welch decided to implement Six Sigma at GE in 1995.
Six Sigma is a methodology that aims to reduce defects and errors in all processes, including transactional processes and manufacturing processes. Organizations that use Six Sigma test their processes again and again to make sure that they are as close to perfect as possible. Five years after Welch's decision to implement Six Sigma, GE had saved a mind-blowing $10 billion.
Welch claimed to have spent as much as half of his time working on people issues. By assembling the right team and ingraining them with the right management philosophies, Welch successfully oversaw the transformation of GE from a relatively strong company to a true international juggernaut.
Questions:
Important Points:
In: Operations Management
Please use RStudio and show all the steps to answer this question.
NY Marathon 2013. The table below shows the winning times (in minutes) for men and women in the New York City Marathon between 1978 and 2013. (The race was not run in 2012 because of Superstorm Sandy.) Assuming that performances in the Big Apple resemble performances elsewhere, we can think of these data as a sample of performance in marathon competitions. Create a 90% confidence interval for the mean difference in winning times for male and female marathon competitors.
| Year | Men | Women | Year | Men | Women |
| 1978 | 132.2 | 152.5 | 1996 | 129.9 | 148.3 |
| 1979 | 131.7 | 147.6 | 1997 | 128.2 | 148.7 |
| 1980 | 129.7 | 145.7 | 1998 | 128.8 | 145.3 |
| 1981 | 128.2 | 145.5 | 1999 | 129.2 | 145.1 |
| 1982 | 129.5 | 147.2 | 2000 | 130.2 | 145.8 |
| 1983 | 129.0 | 147.0 | 2001 | 127.7 | 144.4 |
| 1984 | 134.9 | 149.5 | 2002 | 128.1 | 145.9 |
| 1985 | 131.6 | 148.6 | 2003 | 130.5 | 142.5 |
| 1986 | 131.1 | 148.1 | 2004 | 129.5 | 143.2 |
| 1987 | 131.0 | 150.3 | 2005 | 129.5 | 144.7 |
| 1988 | 128.3 | 148.1 | 2006 | 130.0 | 145.1 |
| 1989 | 128.0 | 145.5 | 2007 | 129.1 | 143.2 |
| 1990 | 132.7 | 150.8 | 2008 | 128.7 | 143.9 |
| 1991 | 129.5 | 147.5 | 2009 | 129.3 | 148.9 |
| 1992 | 129.5 | 144.7 | 2010 | 128.3 | 148.3 |
| 1993 | 130.1 | 146.4 | 2011 | 125.1 | 143.3 |
| 1994 | 131.4 | 147.6 | 2012 | Cancelled | Cancelled |
| 1995 | 131.1 | 148.1 | 2013 | 128.4 | 140.1 |
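A minimal R sketch, assuming the Men and Women columns have been entered in year order as vectors (2012 omitted, leaving 35 paired years). Because the times are paired by year, a paired t-interval is one natural choice; dropping paired = TRUE gives the two-sample version instead:

men <- c(132.2, 131.7, 129.7, 128.2, 129.5, 129.0, 134.9, 131.6, 131.1,
         131.0, 128.3, 128.0, 132.7, 129.5, 129.5, 130.1, 131.4, 131.1,
         129.9, 128.2, 128.8, 129.2, 130.2, 127.7, 128.1, 130.5, 129.5,
         129.5, 130.0, 129.1, 128.7, 129.3, 128.3, 125.1, 128.4)
women <- c(152.5, 147.6, 145.7, 145.5, 147.2, 147.0, 149.5, 148.6, 148.1,
           150.3, 148.1, 145.5, 150.8, 147.5, 144.7, 146.4, 147.6, 148.1,
           148.3, 148.7, 145.3, 145.1, 145.8, 144.4, 145.9, 142.5, 143.2,
           144.7, 145.1, 143.2, 143.9, 148.9, 148.3, 143.3, 140.1)

# 90% confidence interval for the mean difference (women - men),
# paired by year.
t.test(women, men, paired = TRUE, conf.level = 0.90)$conf.int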
In: Statistics and Probability
Barbara Lynch, the product manager for a line of skiwear produced by HeathCo Industries, has been working on developing sales forecasts for the skiwear that is sold under the Northern Slopes and Jacque Monri brands. She has had various regression-based forecasting models developed. Quarterly sales for 1988Q1 through 1997Q4 are as follows:
| Year | Q1 | Q2 | Q3 | Q4 |
| 1988 | 72,962 | 81,921 | 97,729 | 142,161 |
| 1989 | 145,592 | 117,129 | 114,159 | 151,402 |
| 1990 | 153,907 | 100,144 | 123,242 | 128,497 |
| 1991 | 176,076 | 180,440 | 162,665 | 220,818 |
| 1992 | 202,415 | 211,780 | 163,710 | 200,135 |
| 1993 | 174,200 | 182,556 | 198,990 | 243,700 |
| 1994 | 253,142 | 218,755 | 225,422 | 253,653 |
| 1995 | 257,156 | 202,568 | 224,482 | 229,879 |
| 1996 | 289,321 | 266,095 | 262,938 | 322,052 |
| 1997 | 313,769 | 315,011 | 264,939 | 301,479 |
a) Prepare a time-series plot of the data, and on the basis of what you see in the plot, write a brief paragraph in which you explain what patterns you think are present in the sales series.
b) Smooth out seasonal influences and irregular movements by calculating centered moving averages. Add the centered moving averages to the original data you plotted in part a. Has the process of calculating centered moving averages been effective in smoothing out the seasonal and irregular fluctuations in the data? Explain.
c) Determine the degree of seasonality by calculating seasonal indexes for each quarter of the year.
d) Develop a forecast for Ms. Lynch for the four quarters of 1998.
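A minimal R sketch of parts b and c, assuming the 40 quarterly sales figures have been entered in time order in a vector named sales (the name is mine):

# Quarterly series, 1988Q1 through 1997Q4.
sales.ts <- ts(sales, start = c(1988, 1), frequency = 4)

# Part b: centered 4-quarter moving average
# (weights 1/8, 1/4, 1/4, 1/4, 1/8).
cma <- stats::filter(sales.ts, c(1, 2, 2, 2, 1) / 8, sides = 2)
plot(sales.ts, ylab = "Sales")
lines(cma, col = "red")   # smoothed series overlaid on the raw data

# Part c: seasonal indexes -- average the ratio actual/CMA by quarter,
# then normalize so the four indexes average to 1.
ratio <- sales.ts / cma
idx <- tapply(ratio, cycle(sales.ts), mean, na.rm = TRUE)
idx / mean(idx)

For part d, one common approach is to fit a trend line to the deseasonalized series (sales divided by the matching quarterly index) and multiply the 1998 trend projections by the quarterly indexes.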
In: Statistics and Probability
3300 Econometric HW
| obs | RWAGES | PRODUCT |
| 1959 | 59.87100 | 48.02600 |
| 1960 | 61.31800 | 48.86500 |
| 1961 | 63.05400 | 50.56700 |
| 1962 | 65.19200 | 52.88200 |
| 1963 | 66.63300 | 54.95000 |
| 1964 | 68.25700 | 56.80800 |
| 1965 | 69.67600 | 58.81700 |
| 1966 | 72.30000 | 61.20400 |
| 1967 | 74.12100 | 62.54200 |
| 1968 | 76.89500 | 64.67700 |
| 1969 | 78.00800 | 64.99300 |
| 1970 | 79.45200 | 66.28500 |
| 1971 | 80.88600 | 69.01500 |
| 1972 | 83.32800 | 71.24300 |
| 1973 | 85.06200 | 73.41000 |
| 1974 | 83.98800 | 72.25700 |
| 1975 | 84.84300 | 74.79200 |
| 1976 | 87.14800 | 77.14500 |
| 1977 | 88.33500 | 78.45500 |
| 1978 | 89.73600 | 79.32000 |
| 1979 | 89.86300 | 79.30500 |
| 1980 | 89.59200 | 79.15100 |
| 1981 | 89.64500 | 80.77800 |
| 1982 | 90.63700 | 80.14800 |
| 1983 | 90.59100 | 83.00100 |
| 1984 | 90.71200 | 85.21400 |
| 1985 | 91.91000 | 87.13100 |
| 1986 | 94.86900 | 89.67300 |
| 1987 | 95.20700 | 90.13300 |
| 1988 | 96.52700 | 91.50600 |
| 1989 | 95.00500 | 92.40800 |
| 1990 | 96.21900 | 94.38500 |
| 1991 | 97.46500 | 95.90300 |
| 1992 | 100.00000 | 100.00000 |
| 1993 | 99.71200 | 100.38600 |
| 1994 | 99.02400 | 101.34900 |
| 1995 | 98.69000 | 101.49500 |
| 1996 | 99.47800 | 104.49200 |
| 1997 | 100.51200 | 106.47800 |
| 1998 | 105.17300 | 109.47400 |
| 1999 | 108.04400 | 112.82800 |
| 2000 | 111.99200 | 116.11700 |
| 2001 | 113.53600 | 119.08200 |
| 2002 | 115.69400 | 123.94800 |
| 2003 | 117.70900 | 128.70500 |
| 2004 | 118.94900 | 132.39000 |
| 2005 | 119.69200 | 135.02100 |
| 2006 | 120.44700 | 136.40000 |
Problem 2.
Use the data in the “Autocorrelation” tab to test for autocorrelation using the Durbin-Watson test.
Graph the residuals and determine whether they are distributed normally or whether they are biased.
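A sketch of one way to do this in R (the assignment may expect Excel or EViews; the vector names are mine), regressing real wages on productivity and computing the Durbin-Watson statistic from the residuals:

# Assumes rwages and product hold the RWAGES and PRODUCT columns
# in year order.
fit <- lm(rwages ~ product)
e <- resid(fit)

# Durbin-Watson statistic: sum of squared successive differences of
# the residuals over the residual sum of squares. Values near 2
# suggest no autocorrelation; values near 0 suggest positive
# autocorrelation.
sum(diff(e)^2) / sum(e^2)

# Graph the residuals over time and check normality.
plot(e, type = "b", ylab = "Residual"); abline(h = 0, lty = 2)
qqnorm(e); qqline(e)   # roughly a straight line if residuals are normal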
In: Math
<< UNIX >>
The file "unix" contains the following log output:
ELMER SOLVER (v 8.4) STARTED AT: 2020/03/18 13:04:22
ParCommInit: Initialize #PEs: 1
MAIN:
MAIN:
=============================================================
MAIN: ElmerSolver finite element software, Welcome!
MAIN: This program is free software licensed under (L)GPL
MAIN: Copyright 1st April 1995 - , CSC - IT Center for Science Ltd.
MAIN: Webpage http://www.csc.fi/elmer, Email [email protected]
MAIN: Version: 8.4 (Rev: unknown, Compiled: 2020-02-28)
MAIN: Running one task without MPI parallelization.
MAIN: Running with just one thread per task.
MAIN: HYPRE library linked in.
MAIN: MUMPS library linked in.
MAIN:
=============================================================
MAIN:
MAIN:
MAIN: -------------------------------------
MAIN: Reading Model: case.sif
LoadInputFile: Scanning input file: case.sif
LoadInputFile: Loading input file: case.sif
WARNING:: LoadInputFile: There are no BCs in the system!
LoadInputFile: Number of Body Forces: 1
LoadInputFile: Number of Initial Conditions: 0
LoadInputFile: Number of Materials: 1
LoadInputFile: Number of Equations: 1
LoadInputFile: Number of Solvers: 1
LoadInputFile: Number of Bodies: 1
Loading user function library:
[HeatSolve]...[HeatSolver_Init0]
LoadMesh: Base mesh name: ./.
LoadMesh: Elapsed REAL time: 0.5294 (s)
Question: I want to create a new file that contains just the lines between the "=====" separators, using Unix/Linux.
This code isn't complete, because the output is not correct:
#!/bin/bash
# IFS= (the original had a typo, "ISF=") preserves leading whitespace,
# and read -r stops backslashes being interpreted.
while IFS= read -r line
do
    # Note: this matches ANY line containing "=", not just the lines
    # between the "=====" separators, which is why the output is wrong.
    if [[ $line == *"="* ]]
    then
        echo "$line" >> Lines
    fi
done < "unix"
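A minimal sketch of one way to get the intended output, assuming the goal is to copy only the lines that fall between the two "=====" separator lines into the file Lines:

#!/bin/bash
# Toggle a flag at each separator (a line consisting only of "="
# characters) and print the lines seen while the flag is on.
awk '/^=+$/ { inside = !inside; next } inside' unix > Lines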
I REALLY NEED HELP
THANKS,
In: Computer Science
****C language****
char lName[][15] = {"Brum","Carroll","Carter","Dodson","Garbus", "Greenwood", "Hilliard", "Lee", "Mann", "Notz", "Pastrana", "Rhon", "Rodriguez", "Wilson", "Zimmerman"};
char fName [][15] = {"Natalie","Cody","Sophia","Dominic","Chandler","Caleb","Sydnee","Peyton","Brianna","Zachery","Kevin","Luke","Juan","Kelci","Adam"};
char middleInitial[15]={'N','L','X','L','O','L','M','B','S','T','J','C','P','D','Z'};
char dob[][11]={"05/27/1935","11/27/1971","10/17/2003","12/08/1990","11/25/1991","10/30/1992","09/22/1993","08/04/1994","07/11/1995","06/18/1996","05/28/1997","04/07/1998","03/12/1999","02/23/2000","01/15/2001"};
How would we make a list ordered by age, oldest first? Print each patient's full name and then their age. Left-justify the name and right-justify the age.
Example:
Johnson, Fred N 80
**Half of the code is provided**
int patientAge[15] = {0};
for (int p = 0; p < 15; p++)
{
    /* dob is "MM/DD/YYYY", so characters 6-9 hold the year digits. */
    int year = ((dob[p][6] - '0') * 1000) + ((dob[p][7] - '0') * 100) +
               ((dob[p][8] - '0') * 10) + ((dob[p][9] - '0') * 1);
    patientAge[p] = 2019 - year;
    printf("%s, %s %c Age: %d\n", lName[p], fName[p], middleInitial[p],
           patientAge[p]);
}
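A sketch of the missing half, assuming the arrays and patientAge are filled in as above: sort an index array oldest-first, build the "Last, First M" string, and use printf field widths (%-25s left-justifies the name, %3d right-justifies the age):

#include <stdio.h>

void printByAge(char lName[][15], char fName[][15],
                char middleInitial[15], int patientAge[15])
{
    int order[15];
    for (int i = 0; i < 15; i++)
        order[i] = i;

    /* Selection sort on the index array, descending by age. */
    for (int i = 0; i < 14; i++)
        for (int j = i + 1; j < 15; j++)
            if (patientAge[order[j]] > patientAge[order[i]])
            {
                int tmp = order[i];
                order[i] = order[j];
                order[j] = tmp;
            }

    for (int i = 0; i < 15; i++)
    {
        int p = order[i];
        char full[40];
        /* Combine the name parts so a single %-25s can left-justify
           the whole name. */
        snprintf(full, sizeof full, "%s, %s %c",
                 lName[p], fName[p], middleInitial[p]);
        printf("%-25s %3d\n", full, patientAge[p]);
    }
}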
In: Computer Science
USING MATLAB:
Using the data from the table below, fit a fourth-order polynomial to the data, but use a label for the year starting at 1 instead of 1872. Plot the data and the fourth-order polynomial estimate you found, with appropriate labels. What values of the coefficients did your program find? What is the LMS loss function value for your model on the data?
| Year Built | SalePrice |
| 1885 | 122500 |
| 1890 | 240000 |
| 1900 | 150000 |
| 1910 | 125500 |
| 1912 | 159900 |
| 1915 | 149500 |
| 1920 | 100000 |
| 1921 | 140000 |
| 1922 | 140750 |
| 1923 | 109500 |
| 1925 | 87000 |
| 1928 | 105900 |
| 1929 | 130000 |
| 1930 | 138400 |
| 1936 | 123900 |
| 1938 | 119000 |
| 1939 | 134000 |
| 1940 | 119000 |
| 1940 | 244400 |
| 1942 | 132000 |
| 1945 | 80000 |
| 1948 | 129000 |
| 1950 | 128500 |
| 1951 | 141000 |
| 1957 | 149700 |
| 1958 | 172000 |
| 1959 | 128950 |
| 1960 | 215000 |
| 1961 | 105000 |
| 1962 | 84900 |
| 1963 | 143000 |
| 1964 | 180500 |
| 1966 | 142250 |
| 1967 | 178900 |
| 1968 | 193000 |
| 1970 | 149000 |
| 1971 | 149900 |
| 1972 | 197500 |
| 1974 | 170000 |
| 1975 | 120000 |
| 1976 | 130500 |
| 1977 | 190000 |
| 1978 | 206000 |
| 1980 | 155000 |
| 1985 | 212000 |
| 1988 | 164000 |
| 1990 | 171500 |
| 1992 | 191500 |
| 1993 | 175900 |
| 1994 | 325000 |
| 1995 | 236500 |
| 1996 | 260400 |
| 1997 | 189900 |
| 1998 | 221000 |
| 1999 | 333168 |
| 2000 | 216000 |
| 2001 | 222500 |
| 2002 | 320000 |
| 2003 | 538000 |
| 2004 | 192000 |
| 2005 | 220000 |
| 2006 | 205000 |
| 2007 | 306000 |
| 2008 | 262500 |
| 2009 | 376162 |
| 2010 | 394432 |
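A minimal MATLAB sketch, assuming the two table columns have been loaded into column vectors yearBuilt and salePrice (the names are mine), and reading "starting at 1 instead of 1872" as x = yearBuilt - 1871:

% Relabel the years so 1872 maps to 1.
x = yearBuilt - 1871;
y = salePrice;

% Fit a 4th-order polynomial; c holds the five coefficients,
% highest power first.
c = polyfit(x, y, 4);

% Plot the data and a smooth curve from the fit, with labels.
xf = linspace(min(x), max(x), 200);
plot(x, y, 'o', xf, polyval(c, xf), '-');
xlabel('Year label (1 = 1872)'); ylabel('Sale price ($)');
legend('Data', '4th-order fit'); title('Sale price vs. year built');

% LMS loss: mean squared residual of the model on the data.
lmsLoss = mean((y - polyval(c, x)).^2);
disp(c); disp(lmsLoss);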
In: Computer Science
| Number | Year | Gross Income | Price Index | Adjusted Price Index | Real Income |
| 1 | 1991 | 50,599 | 136.2 | 1.362 | 37150.51 |
| 2 | 1992 | 53,109 | 140.3 | 1.403 | 37853.88 |
| 3 | 1993 | 53,301 | 144.5 | 1.445 | 36886.51 |
| 4 | 1994 | 56,885 | 148.2 | 1.482 | 38383.94 |
| 5 | 1995 | 56,745 | 152.4 | 1.524 | 37234.25 |
| 6 | 1996 | 60,493 | 156.9 | 1.569 | 38555.13 |
| 7 | 1997 | 61,978 | 160.5 | 1.605 | 38615.58 |
| 8 | 1998 | 61,631 | 163.0 | 1.630 | 37810.43 |
| 9 | 1999 | 63,297 | 166.6 | 1.666 | 37993.40 |
| 10 | 2000 | 66,531 | 172.2 | 1.722 | 38635.89 |
| 11 | 2001 | 67,600 | 177.1 | 1.771 | 38170.53 |
| 12 | 2002 | 66,889 | 179.9 | 1.799 | 37181.21 |
| 13 | 2003 | 70,024 | 184.0 | 1.840 | 38056.52 |
| 14 | 2004 | 70,056 | 188.9 | 1.889 | 37086.29 |
| 15 | 2005 | 71,857 | 195.3 | 1.953 | 36793.14 |
The data from Exhibit 3 is also in the Excel file income.xls on the course website. Use Excel, along with this file, to determine Mrs. Bella’s real income for the last fifteen years. Do this by first converting each price index from a percent to a decimal by dividing by 100. Then divide gross income by your converted (adjusted) price index. Using Excel, find the mean, median, standard deviation, and variance of her past real income. Explain the meaning of these statistics. Can you use mean income to forecast future earnings? Take into account both statistical and non-statistical considerations.
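Excel's AVERAGE, MEDIAN, STDEV.S, and VAR.S worksheet functions give these statistics directly once the Real Income column is computed. For reference, a minimal R sketch of the same calculation (values typed in from Exhibit 3):

gross <- c(50599, 53109, 53301, 56885, 56745, 60493, 61978, 61631,
           63297, 66531, 67600, 66889, 70024, 70056, 71857)
index <- c(136.2, 140.3, 144.5, 148.2, 152.4, 156.9, 160.5, 163.0,
           166.6, 172.2, 177.1, 179.9, 184.0, 188.9, 195.3)

real <- gross / (index / 100)   # deflate: divide by the index as a decimal

mean(real); median(real); sd(real); var(real)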
In: Math