Questions
BIG Corporation produces just about everything but is currently interested in the lifetimes of its batteries,...

BIG Corporation produces just about everything but is currently interested in the lifetimes of its batteries, hoping to obtain its share of a market boosted by the popularity of portable CD and MP3 players. To investigate its new line of Ultra batteries, BIG randomly selects 1000 Ultra batteries and finds that they have a mean lifetime of 927 hours, with a standard deviation of 83 hours. Suppose that this mean and standard deviation apply to the population of all Ultra batteries. Complete the following statements about the distribution of lifetimes of all Ultra batteries.

(a) According to Chebyshev's theorem, at least ______ of the lifetimes lie between 678 hours and 1176 hours.

(b) According to Chebyshev's theorem, at least ______ of the lifetimes lie between 761 hours and 1093 hours.

(c) Suppose that the distribution is bell-shaped. According to the empirical rule, approximately ________ of the lifetimes lie between 761 hours and 1093 hours.

(d) Suppose that the distribution is bell-shaped. According to the empirical rule, approximately 68% of the lifetimes lie between ______ hours and ______ hours.
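A quick way to fill in these blanks is to compute how many standard deviations each endpoint lies from the mean and apply Chebyshev's bound 1 - 1/k². A short Python check:

```python
# Chebyshev's theorem and the empirical rule for the Ultra battery data
mean, sd = 927, 83

def chebyshev_bound(lo, hi, mean, sd):
    """At least 1 - 1/k^2 of the data lies within k standard deviations.
    Assumes the interval (lo, hi) is symmetric about the mean."""
    k = (hi - mean) / sd
    return 1 - 1 / k**2

print(chebyshev_bound(678, 1176, mean, sd))  # (a) k = 3 -> 8/9, about 88.9%
print(chebyshev_bound(761, 1093, mean, sd))  # (b) k = 2 -> 3/4, i.e. 75%
# (c) empirical rule: roughly 95% lie within 2 sd of a bell-shaped distribution
print(mean - sd, mean + sd)                  # (d) ~68% within 1 sd: 844 to 1010
```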

In: Statistics and Probability

From the time of early studies by Sir Francis Galton in the late nineteenth century linking...

From the time of early studies by Sir Francis Galton in the late nineteenth century linking it with mental ability, the cranial capacity of the human skull has played an important role in arguments about IQ, racial differences, and evolution, sometimes with serious consequences. (See, for example, S. J. Gould, "The Mismeasure of Man.") Suppose that the mean cranial capacity measurement for modern, adult males is ______ cc (cubic centimeters) and that the standard deviation is ______ cc. Complete the following statements about the distribution of cranial capacity measurements for modern, adult males.

(a) According to Chebyshev's theorem, at least ______ of the measurements lie between 565 cc and 1481 cc.

(b) According to Chebyshev's theorem, at least 36% of the measurements lie between ______ cc and ______ cc. (Round your answer to the nearest integer.)

(c) Suppose that the distribution is bell-shaped. According to the empirical rule, approximately 99.7% of the measurements lie between ______ cc and ______ cc.

(d) Suppose that the distribution is bell-shaped. According to the empirical rule, approximately ______ of the measurements lie between 565 cc and 1481 cc.

In: Advanced Math

Solve the question below and show all steps: In a random sample of 28 female students...

Solve the question below and show all steps: In a random sample of 28 female students at a college, it was discovered that each of them possessed multiple handbags. The number of handbags possessed by each of the 28 female students is listed as follows: 4 4 3 3 3 6 4 2 2 2 1 3 3 3 3 4 4 3 2 8 2 2 3 4 3 3 4 2

a. Find the mean and standard deviation for the sample data set.

b. Form the interval x̄ ± 2s.

c. According to Chebyshev's rule, what proportion of sample observations will fall within the interval in part b?

d. According to the Empirical Rule, what proportion of sample observations will fall within the interval in part b?

e. Determine the actual proportion of sample observations that fall within the interval in part b. Does the Empirical Rule provide a good estimate of the proportion?

In: Statistics and Probability

It's a Christmas time and so I hope I'll be pardoned for asking a question which...

It's a Christmas time and so I hope I'll be pardoned for asking a question which probably doesn't make much sense :-)

In a standard undergraduate nuclear physics course one learns about models such as the liquid drop model and the shell model that explain some properties of the nucleus.

But as I understand it, these models are purely empirical and, most importantly, incompatible. Perhaps it's just my weak classical mind, but I can't imagine the same object being described, on the one hand, as a liquid with the constituent nucleons floating freely all around the nucleus and, on the other hand, by a shell model where nucleons occupy discrete energy levels and are separated from each other.

Now I wonder whether these empirical models are really all we've got, or whether there are some more precise models. I guess one can't really compute the shape of a nucleus from first principles the way one can solve the hydrogen atom in QM, especially since "first principles" here probably means starting with QCD (or at least nucleons exchanging pions, which is still QFT). But I hope there has been at least some progress since the old empirical models. So we come to my questions:

Do we have a better model for description of a nucleus than the ones mentioned?

How would some nuclei (both small and large) qualitatively look in such a better model? By "look" I mean: is enough known that I could imagine a nucleus in the same way I can imagine an atom (i.e., a hard nucleus with electrons orbiting around it in various orbitals)?

What is the current state of first-principles QCD computations of the nucleus?

In: Physics

1) The worksheet Engines in the HW8 data workbook on Moodle describes a supplier's shipments of...

1) The worksheet Engines in the HW8 data workbook on Moodle describes a supplier's shipments of engines per year to its customers from 1999 through 2018.

a) Use simple regression with Shipments as the dependent (Y) variable and Year as the independent (X) variable to fit the data. Determine MAE, MSE, and MAPE for the simple regression model. Construct a chart that shows the observed data and the fitted line by Year. Use the simple regression model to predict Shipments for 2019 and 2020.

b) Use a three-period moving average to fit the shipment data. Determine MAE, MSE, and MAPE for the moving average model. Construct a chart that shows the observed data and the fitted line by Year. Use the moving average model to predict Shipments for 2019 and 2020.

c) Use exponential smoothing with a smoothing constant of 0.15 to fit the data. Determine MAE, MSE, and MAPE for the exponential smoothing model. Use the model to forecast Shipments for 2019 and 2020.

d) Short answer: Which of the three forecasting models above (simple regression, moving average, exponential smoothing) would you use to model the data, and why?

Year Shipments
1999 157
2000 168
2001 186
2002 171
2003 198
2004 222
2005 246
2006 233
2007 342
2008 413
2009 517
2010 588
2011 600
2012 524
2013 384
2014 403
2015 522
2016 604
2017 815
2018 955
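As a cross-check for parts a and c, here is a plain-Python sketch of the least-squares trend fit, the three error measures, and simple exponential smoothing (a spreadsheet regression gives the same coefficients; part b's moving average follows the same error-measure pattern):

```python
# Least-squares trend line (part a) and exponential smoothing (part c)
years = list(range(1999, 2019))
ship = [157, 168, 186, 171, 198, 222, 246, 233, 342, 413,
        517, 588, 600, 524, 384, 403, 522, 604, 815, 955]
n = len(ship)

xbar = sum(years) / n
ybar = sum(ship) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(years, ship))
      / sum((x - xbar) ** 2 for x in years))   # slope
b0 = ybar - b1 * xbar                           # intercept
fit = [b0 + b1 * x for x in years]

# Error measures for part a
mae = sum(abs(y - f) for y, f in zip(ship, fit)) / n
mse = sum((y - f) ** 2 for y, f in zip(ship, fit)) / n
mape = 100 * sum(abs(y - f) / y for y, f in zip(ship, fit)) / n
print(f"2019 -> {b0 + b1 * 2019:.0f}, 2020 -> {b0 + b1 * 2020:.0f}")

# Simple exponential smoothing, alpha = 0.15, initialized at the first observation
alpha, smooth = 0.15, [ship[0]]
for y in ship[:-1]:
    smooth.append(alpha * y + (1 - alpha) * smooth[-1])
forecast = alpha * ship[-1] + (1 - alpha) * smooth[-1]  # flat forecast for 2019 and 2020
```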

In: Statistics and Probability

1-Jan-2010 ND 4-Jan-2010 92.5500 3-Jan-2011 81.5600 2-Jan-2012 ND 3-Jan-2012 76.6700 1-Jan-2013 ND 2-Jan-2013 87.1000 1-Jan-2014 ND...

Date          Exchange rate
1-Jan-2010    ND
4-Jan-2010    92.5500
3-Jan-2011    81.5600
2-Jan-2012    ND
3-Jan-2012    76.6700
1-Jan-2013    ND
2-Jan-2013    87.1000
1-Jan-2014    ND
2-Jan-2014    104.8400
1-Jan-2015    ND
2-Jan-2015    120.2000
1-Jan-2016    ND
4-Jan-2016    119.3000
2-Jan-2017    ND
3-Jan-2017    117.6800
1-Jan-2018    ND
2-Jan-2018    112.1800
1-Jan-2019    ND
2-Jan-2019    109.2200
1-Jan-2020    ND
2-Jan-2020    108.4300

Look at the data for the Japanese yen from 2010 to the present. Assume that you were in Tokyo for New Year's Eve from January 1, 2010 to January 1 this year and bought a bento (box lunch) for 1000 yen each year. Convert this amount to dollars for the first day in January that data is available for each of the years you were in Tokyo.

  1. Create a chart that plots the dollar price of the bento box over time on each of the January days you used.
  2. Has the dollar appreciated or depreciated against the yen during this time period?
  3. What is the least amount in dollars that your box lunch cost ($ amount and date)?
  4. What is the most amount in dollars that your box lunch cost ($ amount and date)? Make sure you calculate the US$ price of the bento box for every January between 2010 and the present. Please, no handwriting.
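Assuming the quoted values are yen per U.S. dollar (the usual convention for this series), the conversion for each January date reduces to dividing 1000 yen by the rate:

```python
# Dollar cost of a 1000-yen bento each January, using the first
# January date with data each year; rates assumed to be yen per USD
yen_per_usd = {
    2010: 92.55, 2011: 81.56, 2012: 76.67, 2013: 87.10, 2014: 104.84,
    2015: 120.20, 2016: 119.30, 2017: 117.68, 2018: 112.18,
    2019: 109.22, 2020: 108.43,
}

bento_usd = {yr: 1000 / rate for yr, rate in yen_per_usd.items()}
for yr in sorted(bento_usd):
    print(f"{yr}: ${bento_usd[yr]:.2f}")

cheapest = min(bento_usd, key=bento_usd.get)   # question 3: least dollar cost
priciest = max(bento_usd, key=bento_usd.get)   # question 4: most dollar cost
```

Note that a higher yen-per-dollar rate means a cheaper bento in dollars, so the dollar was strongest where the rate peaked and weakest where it bottomed out.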

In: Economics

In Java: Write two advanced sorting methods of your choice: (Shell Sort OR Radix Sort) AND...

In Java:

Write two advanced sorting methods of your choice: (Shell Sort OR Radix Sort) AND (Merge Sort OR Quick Sort). If you choose Shell Sort, experiment with different incremental sequences to see how they affect the algorithm's run time efficiency (count the number of comparisons and exchanges). If you choose to implement Radix Sort, answer the following question as well: Can you write a version of Radix Sort for String objects? If yes, explain how. If not, explain why.

Assume that input data is generated randomly and stored in a text file.

You will experiment with your program in two steps:

Step 1: Experimenting with prototype data (integers from 1 to 10) to ensure that your implementation works correctly and the results match expectations. The results must be reported in a table format (not generated by the program, but collected manually from multiple program runs) in a Word document as follows:

             best case          worst case         average case
             char.1 ... char.N  char.1 ... char.N  char.1 ... char.N
alg.1        ...                ...                ...
alg.2        ...                ...                ...
 ...
alg.N        ...                ...                ...

Step 2: Experimenting with large data sets of 2000 elements. The results must be reported in the same table format.

In addition, in the report, explain the empirical results generated by your program, comparing them to the known theoretical results and paying special attention to any discrepancies, which must be clearly explained.

Must submit for grading: the Java code, the random text file(s) generated by an independent module (you may need multiple random text files to better characterize the average efficiency of the respective algorithm), AND MOST IMPORTANTLY the report, as explained above, in a Word document.
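The assignment itself calls for Java, but the gap-sequence experiment is language-neutral; as a compact sketch, here is an instrumented Shell sort in Python that counts comparisons and exchanges for two classic sequences (Shell's halving sequence and Knuth's 3k+1 sequence):

```python
import random

# Shell sort instrumented to count comparisons and exchanges,
# so different gap sequences can be compared empirically.
def shell_sort(a, gaps):
    a = a[:]                      # sort a copy
    comparisons = exchanges = 0
    for gap in gaps:
        for i in range(gap, len(a)):
            j = i
            while j >= gap:
                comparisons += 1
                if a[j - gap] > a[j]:
                    a[j - gap], a[j] = a[j], a[j - gap]
                    exchanges += 1
                    j -= gap
                else:
                    break
    return a, comparisons, exchanges

data = random.sample(range(1000), 200)   # stand-in for the random input file

# Shell's original halving sequence vs. Knuth's 3k+1 sequence
shell_gaps = [g for g in (len(data) >> k for k in range(1, 9)) if g > 0]
knuth_gaps, g = [], 1
while g < len(data) // 3:
    knuth_gaps.append(g)
    g = 3 * g + 1
knuth_gaps.reverse()

for name, gaps in (("Shell", shell_gaps), ("Knuth", knuth_gaps)):
    out, c, e = shell_sort(data, gaps)
    print(f"{name}: {c} comparisons, {e} exchanges, sorted={out == sorted(data)}")
```

Running this over several random files and input sizes yields the comparison/exchange counts the report's table asks for; the Java version mirrors the same counters.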

In: Computer Science

You have the following data on quantity demanded of commodity X, its price, and other factors during 1991-2005:

Assignment 1 (New Version)

You have the following data on quantity demanded of commodity X, its price, and other factors during 1991-2005:

Year   Quantity (Q), KG   Expenditures (M), NIS   Price of X (Px), NIS/KG   Price of substitutes (Py), NIS/KG
1991    4.0     400    9   10
1992    4.5     500    8   14
1993    5.0     600    9   12
1994    5.5     700    8   13
1995    6.0     800    7   11
1996    7.0     900    6   15
1997    6.5    1000    6   16
1998    6.5    1100    8   17
1999    7.5    1200    5   22
2000    7.5    1300    5   19
2001    8.0    1400    5   20
2002   10.0    1500    3   23
2003    9.0    1600    4   18
2004    9.5    1700    3   24
2005    8.5    1800    4   21

Based on the above data:

  1. Draw the relationship between Q and P.
  2. Using OLS, estimate the demand function in linear form.
  3. Comment on the results, taking into account any prior expectations you have about demand functions.
  4. Compute the predicted values of the dependent variable and the residuals.
  5. How much of the variation in Q is explained by changes in Px, Py, and Expenditures (M)?
  6. Interpret the empirical results of the estimated equation.
  7. Calculate the demand elasticities at the means.
  8. Construct a 95% confidence interval for the estimated own-price elasticity at the mean and in the year 2005.
  10. Construct a confidence interval for the quantity demanded in the year 2005 and in the year 2008, when Px = 7, Py = 3.5, and Expenditures (M) = 1900.
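A sketch of the OLS estimation behind questions 2, 4, 5, and 7, assuming NumPy is available (the variable names are my own, not from the assignment):

```python
import numpy as np

# OLS estimate of the linear demand function Q = b0 + b1*Px + b2*Py + b3*M
Q  = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 7.0, 6.5, 6.5,
               7.5, 7.5, 8.0, 10.0, 9.0, 9.5, 8.5])
M  = np.array([400, 500, 600, 700, 800, 900, 1000, 1100,
               1200, 1300, 1400, 1500, 1600, 1700, 1800])
Px = np.array([9, 8, 9, 8, 7, 6, 6, 8, 5, 5, 5, 3, 4, 3, 4])
Py = np.array([10, 14, 12, 13, 11, 15, 16, 17, 22, 19, 20, 23, 18, 24, 21])

X = np.column_stack([np.ones_like(Q), Px, Py, M])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, Q, rcond=None)

Q_hat = X @ beta                        # predicted values (question 4)
resid = Q - Q_hat                       # residuals (question 4)
r2 = 1 - resid @ resid / ((Q - Q.mean()) @ (Q - Q.mean()))   # question 5

# Own-price elasticity at the means (question 7): e = b1 * mean(Px) / mean(Q)
e_px = beta[1] * Px.mean() / Q.mean()
print(beta, r2, e_px)
```

The same fitted coefficients and residuals feed the confidence-interval questions once their standard errors are computed.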

In: Statistics and Probability

Match each description with the appropriate term....

Match each description with the appropriate term.

  1. Subset of corporate governance that focuses on the management and assessment of strategic IT resources.
  2. All data processing is performed at a central site.
  3. The central IT function is organized into small IT units that are placed under the control of end users.
  4. The ability of the system to continue operation when part of the system fails.
  5. Using parallel disks with redundant data and applications so that if one disk fails, lost data can be reconstructed.
  6. A statement of all actions to be taken before, during, and after any type of disaster.
  7. The pooling of physical storage from multiple devices into what appears to be a single virtual storage device.
  8. Replaced SAS 70 and is the definitive standard by which auditors can gain knowledge that processes and controls at third-party vendors are adequate to prevent or detect material errors.
  9. An agreement between organizations to aid each other with data processing in a disaster.
  10. Hot site plan that is a fully equipped site that many companies share.

A. IT Governance
B. Recovery Operations Center
C. Centralized Data Processing
D. Fault Tolerance
E. SSAE 16
F. Disaster Recovery Plan
G. Redundant Arrays of Independent Disks (RAID)
H. Mutual Aid Pact
I. Distributed Data Processing (DDP)
J. Storage Virtualization

In: Accounting

same in 1950 and 2000, or was it higher in one of these years than the...

Was the proportion of Californians who were born in California the same in 1950 and 2000, or was it higher in one of these years than the other? To investigate this question, a simple random sample of 500 Californians was selected from the 1950 Census, and an independent random sample of 500 Californians was selected from the 2000 Census.

1. Does this study involve random sampling, random assignment, both, or neither? Explain.
2. State the appropriate null and alternative hypotheses, in symbols and in words.

It turned out that 219 of the 500 Californians in the 1950 sample had been born in California, compared to 258 of the 500 Californians in the 2000 sample.

3. Comment on whether the technical conditions of the two-sample z-test are satisfied.
4. Use these sample data to calculate the test statistic and p-value.
5. Do the two sample proportions differ significantly at α = 0.05?
6. Draw your conclusion from this research.
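For the test statistic and p-value, the pooled two-sample z-test can be computed directly with the standard library:

```python
from math import sqrt, erf

# Two-sample z-test for p1 (born in CA, 1950 sample) vs p2 (2000 sample)
x1, n1 = 219, 500
x2, n2 = 258, 500
p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)        # pooled proportion under H0: p1 = p2

se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

p_value = 2 * normal_cdf(-abs(z))      # two-sided
print(f"z = {z:.3f}, p-value = {p_value:.4f}")
```

The z statistic comes out near -2.47 with a two-sided p-value of roughly 0.014, which is below α = 0.05, so the two sample proportions differ significantly.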

In: Statistics and Probability