In: Mechanical Engineering
1. To obtain a solution through superposition of solutions, describe the number of non-homogeneous conditions per sub-problem.
2. Discuss the relevance of shape factors.
3. Discuss the requirements for using the superposition method of solution.
4. Describe what is meant by eigen-problem solution in heat transfer.
5. Discuss what values a separation constant can attain during the separation of variables method of solution.
1.
Assume we have two different solutions x(t) = α(t) and x(t) = β(t) to an ODE of the form dx/dt + a(t)x(t) = 0.
For the two curves to cross each other we must have:
α(t0) = β(t0) and dα/dt|t=t0 ≠ dβ/dt|t=t0 for some t0.
With dα/dt + a(t)α(t) = 0 and dβ/dt + a(t)β(t) = 0 we see that this is impossible: if α(t0) = β(t0), the ODE forces dα/dt|t=t0 = -a(t0)α(t0) = -a(t0)β(t0) = dβ/dt|t=t0. Because the curves cannot cross, each solution must be uniquely determined by its value at some t = t0.
Now in a second-order ODE like d²x/dt² + a(t)dx/dt + b(t)x = 0, two solutions can cross each other. But when we assume two solutions with α(t0) = β(t0), dα/dt|t=t0 = dβ/dt|t=t0, and d²α/dt²|t=t0 ≠ d²β/dt²|t=t0 for some t0, we again see that this cannot happen.
So for two solutions α(t) and β(t) with α(t0) = β(t0), the curves of dα/dt and dβ/dt can never cross each other: whenever dα/dt and dβ/dt are equal at t0, their derivatives d²α/dt² and d²β/dt² must also be equal there.
Thus the curve of dx/dt is uniquely determined by the choice of two conditions, x(t0) and dx/dt|t=t0. By integrating dx/dt while keeping these conditions fixed, x(t) must in turn be completely determined by this choice of conditions.
The same goes for higher-order linear homogeneous ODEs, of course.
The only thing left to check, for the solution to be the complete (general) solution, is that every admissible set of initial values can be reached with the constant coefficients in our solution. This can be checked using the Wronskian determinant: if it is nonzero, the solutions are linearly independent and span the full solution space.
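As a concrete check, consider x'' + x = 0 with the two solutions cos(t) and sin(t); their Wronskian W = f·g' − f'·g is identically 1, so they are independent and their linear combinations form the complete solution. A minimal sketch in Python (the ODE and solution pair are chosen purely for illustration):

```python
import math

# Wronskian of two solutions f = cos(t), g = sin(t) of x'' + x = 0:
# W = f*g' - f'*g; a nonzero W means the solutions are independent.
def wronskian(t):
    f, fp = math.cos(t), -math.sin(t)   # f and its derivative
    g, gp = math.sin(t), math.cos(t)    # g and its derivative
    return f * gp - fp * g

print(wronskian(0.0), wronskian(1.3))   # both ≈ 1.0 at every t
```

Since W ≠ 0 at every t, any pair of values x(t0), dx/dt|t=t0 can be matched by constants c1, c2 in c1·cos(t) + c2·sin(t).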
2. The shape factor is a dimensionless number that characterizes the efficiency of a cross-sectional shape, regardless of its scale, for a given mode of loading, e.g. bending, torsion, or axial compression.
We all know that the type of material governs the strength of a structural member. Similarly, the cross-sectional shape of a member is responsible for enhancing the load-bearing capacity of the section.
An engineering material has a given modulus and strength, but it can be made stiffer and stronger when loaded in bending or twisting by shaping it into an I-beam or a hollow tube, respectively. It can be made less stiff by flattening it into a leaf spring or by winding it, as a wire, into a helix. 'Shaped' sections (i.e. a cross-section formed into a tube, a box section, an I-section or the like) carry bending, torsional, and axial-compressive loads more 'efficiently' (i.e. for given loading conditions, the section uses as little material as possible) than solid sections. The efficiency can be enhanced further by introducing sandwich panels of the same or different materials.
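This efficiency can be made quantitative. One common definition is Ashby's elastic bending shape factor, φ = 4πI/A², normalized so a solid circular section scores 1; a thin-walled tube scores roughly r/t. The dimensions below are illustrative values, not from the text:

```python
import math

def shape_factor_bending(I, A):
    # Ashby's elastic bending shape factor: phi = 4*pi*I / A**2,
    # normalized so a solid circular section gives phi = 1.
    return 4 * math.pi * I / A**2

# Solid circular rod, radius r (illustrative value)
r = 0.02
I_solid = math.pi * r**4 / 4
A_solid = math.pi * r**2
print(shape_factor_bending(I_solid, A_solid))   # ≈ 1.0

# Thin-walled tube, mean radius r, wall thickness t_w << r
t_w = 0.002
I_tube = math.pi * r**3 * t_w
A_tube = 2 * math.pi * r * t_w
print(shape_factor_bending(I_tube, A_tube))     # ≈ r/t_w = 10
```

For the same cross-sectional area (hence the same mass per length), the tube is about ten times as efficient in bending as the solid rod.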
But when choosing shapes one has to be careful that the basic functional requirements are not violated.
3. The strategy used in the superposition theorem is to consider one source of power within a network at a time, eliminating all the others (voltage sources are replaced by short circuits, current sources by open circuits), and using series/parallel analysis to determine the voltage drops and/or currents within the modified network for each power source separately. The individual contributions are then summed to give the full solution. The essential requirement is linearity: superposition is valid only when the governing equations, and hence all the elements of the system, are linear.
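As a sketch of the procedure, take a hypothetical two-source network (all component values chosen for illustration): V1 feeds a node through R1, V2 feeds the same node through R2, and R3 ties the node to ground. Zeroing one voltage source at a time (replacing it with a short) and summing the two contributions reproduces the direct nodal-analysis result:

```python
def parallel(a, b):
    # Equivalent resistance of two resistors in parallel
    return a * b / (a + b)

# Hypothetical example circuit (values chosen for illustration)
V1, V2 = 12.0, 5.0
R1, R2, R3 = 100.0, 200.0, 300.0

# Direct nodal analysis at the common node
V_direct = (V1 / R1 + V2 / R2) / (1 / R1 + 1 / R2 + 1 / R3)

# Superposition: zero one source at a time (a voltage source becomes
# a short), leaving a simple voltage divider each time
V_from_1 = V1 * parallel(R2, R3) / (R1 + parallel(R2, R3))
V_from_2 = V2 * parallel(R1, R3) / (R2 + parallel(R1, R3))

print(abs(V_direct - (V_from_1 + V_from_2)) < 1e-9)   # True
```

Each sub-problem is a plain series/parallel circuit, which is the whole point of the method.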
4.
An eigenvalue problem is a problem that looks as if it should have continuous answers, but instead only has discrete ones. The problem is to find the numbers, called eigenvalues, and their matching vectors, called eigenvectors. This is extremely general—it is used in differential equations (because solutions to linear differential equations form linear spaces!) and described in detail in linear algebra.
The basic idea for linear equations is this: Ax = b has a unique solution if A is invertible, and has either many or no solutions if not. If it were basic algebra, we would divide both sides of the equation by A, but division is not defined for matrices. So we work around it; if your calculator's division button were stuck and you needed to divide a number by 2, what would you do? You could multiply by 0.5 and get the answer: 0.5 is the (multiplicative) inverse of 2. So we solve Ax = b by finding the matrix A^(-1) and multiplying both sides by it, giving A^(-1) A x = I x = x = A^(-1) b.
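A minimal numeric sketch of x = A^(-1) b for a 2x2 system (the matrix and right-hand side are chosen for illustration), using the cofactor formula for the inverse:

```python
# Invert a 2x2 matrix [[a, b], [c, d]] by the cofactor formula and
# solve A x = rhs, mirroring x = A^(-1) b from the text.
def inv2(a, b, c, d):
    det = a * d - b * c
    assert det != 0, "A must be invertible for a unique solution"
    return (d / det, -b / det, -c / det, a / det)

A = (2.0, 1.0, 1.0, 3.0)      # [[2, 1], [1, 3]]
rhs = (3.0, 5.0)

i11, i12, i21, i22 = inv2(*A)
x = (i11 * rhs[0] + i12 * rhs[1], i21 * rhs[0] + i22 * rhs[1])
print(abs(x[0] - 0.8) < 1e-9, abs(x[1] - 1.4) < 1e-9)   # True True
```

When det(A) = 0 the inverse does not exist, which is exactly the "many or no solutions" case above.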
The basic idea for eigenvalues is this: Does Ax = Lx for some vector x and number L? (Usually lambda is used for L.)
Imagine a 3x3 matrix A that transforms vectors in 3D into other vectors in 3D. The eigenvalue question is this: what vectors are not rotated by A? And if they aren’t rotated, are they reversed, smaller, larger, or the same magnitude? What number L is the length of eigenvector x multiplied by when A acts on it?
Some matrices don’t have real eigenvalues or eigenvectors—a rotation matrix in 2D, for example, rotates everything. (The zero vector 0 doesn’t count, because it always works trivially.) Some matrices have one or more; some have a “full set” of n eigenvectors in n dimensions. Those matrices we call “diagonalizable.” This is important in quantum mechanics because observables are represented by diagonalizable (Hermitian) matrices: measuring a quantity yields one of the matrix’s eigenvalues as a specific answer.
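A small worked example, using a symmetric 2x2 matrix chosen for illustration: the eigenvalues come from the characteristic polynomial, and the eigenvector (1, 1) is stretched by a factor of 3 rather than rotated:

```python
import math

# Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, d]] via the
# characteristic polynomial: L**2 - (a + d)*L + (a*d - b*b) = 0
a, b, d = 2.0, 1.0, 2.0
tr, det = a + d, a * d - b * b
disc = math.sqrt(tr * tr - 4 * det)
L1, L2 = (tr + disc) / 2, (tr - disc) / 2
print(L1, L2)                     # 3.0 1.0

# Check A x = L x for the eigenvector x = (1, 1) belonging to L1 = 3:
# the direction is unchanged, only the length is scaled by L1.
x = (1.0, 1.0)
Ax = (a * x[0] + b * x[1], b * x[0] + d * x[1])
print(Ax == (L1 * x[0], L1 * x[1]))   # True
```

A symmetric real matrix like this one always has a full set of real eigenvectors, i.e. it is diagonalizable.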