Talk:Cramer's rule
This is the talk page for discussing improvements to the Cramer's rule article. This is not a forum for general discussion of the article's subject.
This level-5 vital article is rated C-class on Wikipedia's content assessment scale.
When did Gabriel Cramer come up with Cramer's Rule?
ANS: Around 1750, two years before his death. He died without ever explaining how exactly he came up with the rule. There is some speculation that Colin Maclaurin came up with the same rule in 1729, but there must have been a paper shortage in Scotland, because it was not published until after his death.
A Google search for "cramer's rule maclaurin" turns up several pages discussing who came up with it first. This one (http://www.findarticles.com/p/articles/mi_qa3789/is_200110/ai_n8969722) says that Maclaurin published it in 1748, two years before Cramer did, but that Cramer's version probably gained popularity due to his superior notation.
More precise statement
Any chance someone is willing to give a more precise statement of Cramer's rule? As in the following: Let R be a commutative ring, M a finitely generated R-module with generators m_1, ..., m_n, and f : M -> M an endomorphism of R-modules. Then f determines a matrix A = (a_ij)
via f(m_j) = sum_i a_ij m_i. Then Adj(A)A = det(A)I, where Adj(A) is the adjugate (the transpose of the cofactor matrix) of A.
- That is not Cramer's rule, though it is related, but in a not so trivial way. There is no general rule for solving square linear systems over a commutative ring for the simple reason that they generally do not have unique solutions (try solving in Z/6Z). Something can be said (if the determinant of the system happens to be an invertible element of the ring for instance) but I'm not sure this article is the right place. Marc van Leeuwen (talk) 14:55, 7 September 2009 (UTC)
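The identity Adj(A)A = det(A)I discussed above is easy to check symbolically; here is a quick sanity check (my own illustration, assuming SymPy is available; its `adjugate` method returns the transpose of the cofactor matrix). Note that no division occurs anywhere, which is why the identity makes sense over any commutative ring.

```python
from sympy import Matrix, eye

# An integer matrix, viewed over the commutative ring Z
A = Matrix([[2, 3, 1],
            [4, 0, 5],
            [1, 1, 1]])

# Adj(A) * A = det(A) * I holds with no division anywhere,
# so it is valid over any commutative ring, not just fields
assert A.adjugate() * A == A.det() * eye(3)
assert A * A.adjugate() == A.det() * eye(3)
```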
Can we also get an easier-to-understand version of Cramer's rule? If anyone could provide that I'd be mighty grateful. Phinixx (talk) 21:46, 7 January 2014 (UTC)
- It might help if you say what you find hard to understand. (For a WP math article, that version is pretty straightforward!) -- Elphion (talk) 17:51, 8 January 2014 (UTC)
Does the example of comparative statics in econ make any sense?
Am I right that there's nothing special about comparative statics? It's just an example of solving a system of linear equations, nothing particular to Cramer's rule. This latest edit says it's of 'further significance', but it doesn't seem to be. --Tristanreid
- I don't find that example very useful either. Oleg Alexandrov (talk) 23:52, 17 October 2005 (UTC)
- ok, I'm yanking it. Thanks for signing me, by the way. I forgot last time. Tristanreid 02:44, 19 October 2005 (UTC)
Where did Gaon come from?
Vilna Gaon seems to have come out of left field. I note that the main article on him includes a similar statement without attribution. Does anyone have a reference for this statement? S.N. Hillbrand 20:45, 22 December 2005 (UTC)
- Thanks for removing that. Looked like a strange edit. Oleg Alexandrov (talk) 01:25, 23 December 2005 (UTC)
I actually remember it being printed in my algebra book as a possible etymology. I really don't know if there is a credible source for it. I think they might have cast doubt on it themselves. I have no idea how it even got started, but unless I am mistaken, it was in my HBJ algebra book issued by the Los Angeles Unified School District. PhatJew 19:18, 9 March 2006 (UTC)
The reference to the Gaon is a legitimate one that is quoted in many sources. It should be kept as an alternative etymology.
Where has the nice proof gone?
About a year ago there was a beautiful proof of Cramer's rule on this wikipedia page. Can you, please, bring it back? Thanks Pavel.Pokorny@vscht.cz —Preceding unsigned comment added by 147.33.113.54 (talk) 17:05, 6 December 2007 (UTC)
Incorrect Example
The 4th image in the example section is incorrect. The determinant in the numerator is calculated incorrectly. Sorry I cannot fix it right now; will someone else? --128.101.152.71 (talk) 21:12, 10 March 2008 (UTC)
Re: Relevance to eigenvalues
- "The numerator will always be zero, since here c=0."
I can't find any previous reference to c. Is this a remnant of an earlier edit? -- 72.224.136.152 (talk) 14:41, 23 September 2009 (UTC)
- OK, the problem came from an earlier edit. Variable c was changed to b but a later reference to it remained unchanged. -- TheMaestro (talk) 02:08, 24 September 2009 (UTC)
All right, thank you. But here is another question: what does this section have to do with this article in the first place? Cramer's rule is about the case where the coefficient determinant is non-zero, in which case it gives an explicit expression for the solution. But in the case of homogeneous linear equations this is quite unnecessary: it is obvious that the zero vector will always be a solution, so you don't need a complicated formula to tell you that when the coefficient determinant is non-zero. The only relevant point here is that the solution will be non-unique precisely when that determinant is zero; although that could possibly be considered a (minor) part of the statement of the theorem called Cramer's rule, it is not even mentioned in the current article. (If it were, it should be stated more precisely that a zero coefficient determinant implies either a non-unique solution for the whole system or no solution at all, since the article talks about non-homogeneous systems; also beware that when the determinant is zero there could still be a unique solution for some of the unknowns.) My feeling is that this subsection does not belong here, and should be removed, or possibly moved to an article to which it is more relevant. Marc van Leeuwen (talk) 06:50, 24 September 2009 (UTC)
- I agree, I think the connection to Cramer's rule is tangential at best, in more senses than one. -- TheMaestro (talk) 01:50, 25 September 2009 (UTC)
Cramer's Rule is useless
I would dispute all of the (limited) claims of useful applications for Cramer's rule. The cost of applying the rule grows exponentially with dimension, and I can't imagine why anyone would want to use it in practice.
Certainly, it has no role in the solution of linear systems, however small. The claim at the start of the article that "as no pivoting is needed, it is more efficient than Gaussian elimination for small matrices" is plainly wrong: pivoting is extremely efficient and improves the quality of the solutions. Would anyone like to claim a stability result for Cramer's rule? And the best way to compute the determinants used in Cramer's rule is Gaussian elimination, anyway. Furthermore, why would anyone use a SIMD machine to solve small linear systems? How on earth can you do this efficiently? The phrase "This formula is, however, of limited practical value for larger matrices" would be better stated as "This formula is, however, of no practical value" wherever it appears.
"Cramer's rule is of theoretical importance in that it gives an explicit expression for the solution of the system". Why is this important? Why is it better than x = A^{-1}b? Cramer's rule can be used to give an explicit representation of an individual component independently of all other components; I have heard this used as justification for its importance, but, again, this could be done much more efficiently with Gaussian elimination (exploiting the resulting LU factorization).
Can anyone substantiate the line "Cramer's rule is also extremely useful for solving problems in differential geometry"? I would expect that anything it can do can be done better another way. But I am willing to be proved wrong.
Applications to algebra/relevance to eigenvalues: is it really worth mentioning that the rule can be used to prove well known results in an ugly way?
I apologize to the Cramer family for being so negative, but I have seen no evidence that the rule is anything more than a historical curiosity.
If no-one wants to refute my claims, I will happily edit the page to conform with my world view. Not that it will stop climate change, or anything :-) —Preceding unsigned comment added by 130.159.17.136 (talk) 17:18, 10 December 2009 (UTC)
- First, wikipedia articles should not be written to conform to any particular world view, but should reflect general consensus. If everyone considered Cramer's rule useless, then mankind would have forgotten about it long ago and this article would not exist.
- Now some more specific points. Although the greatest importance of Cramer's rule is definitely not in computer programming (and Cramer was not into computer science), it seems likely that, at least when one comes across a situation where a program needs the solution of a specific 2×2 system, the easiest solution will be to code the solution by Cramer's rule directly rather than to invoke some general equation-solving procedure involving Gaussian elimination. You say writing x = A^{-1}b is a theoretical alternative to Cramer's rule, but it is of no use unless you have an expression for A^{-1}. Such an expression exists... and is a blown-up version of Cramer's rule! In fact if you imagine any system where the coefficients are not completely known numbers (say they depend on one or more unspecified parameters; the worst case is that all coefficients are independent parameters), then one cannot apply Gaussian elimination or any other method that needs to make decisions depending on the values of the coefficients. Cramer's rule on the other hand needs to make no decisions at all (all that matters is that the denominator is nonzero, which is the condition for a unique solution to exist in the first place) and can be applied to such a system. That is exactly why the rule is of theoretical importance. There are many occasions in algebra where it is important for reasoning to know that an expression of a certain kind exists, even if one has no desire to actually use such an expression in computation. Marc van Leeuwen (talk) 08:23, 11 December 2009 (UTC)
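Marc's 2×2 point is easy to make concrete. A minimal sketch (the function name and interface are my own, purely for illustration): the arithmetic below involves no pivoting and no decisions beyond the one unavoidable singularity check, which is exactly why small fixed-size systems are often coded this way directly.

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve the system  a*x + b*y = e,  c*x + d*y = f
    directly by Cramer's rule. Returns None when the
    coefficient determinant is zero (no unique solution)."""
    det = a * d - b * c
    if det == 0:
        return None
    x = (e * d - b * f) / det   # numerator: b replaces column 1
    y = (a * f - e * c) / det   # numerator: b replaces column 2
    return x, y
```

For example, `solve_2x2(1, 1, 1, -1, 3, 1)` solves x + y = 3, x - y = 1.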
- Thank you for your nice answer. Looking at modern texts on matrix algebra, I don't think I'm alone in branding the method useless and I'm getting impatient for consensus to move my way. I've bored my colleagues into submission, and now it's time for the rest of the world!
- Let me address your specific points. With reference to the explicit representation of the solution of a 2x2 system, I agree that it would seem contrary not to use a formula involving the determinant. However, this solution existed well before Cramer's rule. For bigger systems, the story changes. Even an explicit expression for 3x3 systems involving expanded determinants would be inefficient and offers no insight. I would draw an (imperfect) analogy with polynomials: compare the use of the quadratic formula, which is ubiquitous in textbooks and considered useful, with the cubic and quartic formulae. So I'll modify my claim to "Cramer's rule is useless for n > 2".
- Let me move on to the theoretical importance. Once it is known that an inverse exists, I do not see how the fact that hypothetically one could write down an explicit representation of the inverse in terms of its entries using Cramer's rule adds anything to algebraic reasoning. As you imply, one cannot prescribe a general order of pivoting which will guarantee that Gaussian elimination does not encounter a zero denominator. But for any value the entries can take, I know that such an ordering exists so long as the matrix is nonsingular. Thus, as with Cramer's rule, I know an expression of a certain kind exists, even if I have no desire to use it explicitly. In fact, I know all sorts of expressions exist simply as a consequence of the nonsingularity of the system. This simple fact is extremely useful with or without an explicit formula. What, in particular, does Cramer's rule add? I would contend that explicit use of Cramer's rule is relegated, again, to the case n = 2. —Preceding unsigned comment added by 130.159.17.136 (talk) 17:36, 14 December 2009 (UTC)
- It would be nice if you could cite an algebra text that states Cramer's rule and then says that it is plainly useless (not just "not very useful for computation"). Actually, even for computation it is not that bad, given that one may evaluate the determinants in any way one likes, and there are certainly methods that do not need (super)exponential time in the size to do that, at least when the coefficients are numbers. But this is not the main point; most people will agree that solving an explicit linear system is usually better done by other means.
- However, with your theoretical remarks I firmly disagree. One theoretical point of huge importance is that the inverse of a square matrix exists if and only if its determinant is invertible (I say that rather than "nonzero", because I am also referring to matrices with entries in a ring rather than a field, such as matrices with polynomial entries). This fact is very closely related to Cramer's rule (some would say it is part of Cramer's rule, and in any case it can be derived immediately from it), but there is no way one could arrive at this result by just studying what happens in Gaussian elimination. So even your starting point "once it is known that an inverse exists" is using Cramer's rule: unless your knowledge that an inverse exists is based on actually knowing the inverse (in which case there is nothing left to do), I cannot see how you could be sure without knowing the determinant sufficiently well to be sure it is invertible. When you say "for any value the entries can take, I know that such an ordering exists so long as the matrix is nonsingular", this is true only if the entries take values in a field (where every nonzero element is invertible); with entries that are polynomials, for instance, it could well be that the matrix is (always) invertible, and yet one cannot perform Gaussian elimination.
- Here is a theoretical question that Gaussian elimination cannot answer for you, but Cramer's rule can: suppose you have a system depending on (one or more) parameters that can vary continuously, and its coefficient matrix is always nonsingular, so that for all parameter values there is a unique solution. Does this unique solution vary continuously with the parameters? Using Gaussian elimination, there would be discontinuous changes in the solution procedure, as coefficients becoming zero at some parameter values force you to change pivots, and one cannot see why this could not cause discontinuous changes in the solution. From Cramer's rule, however, continuity is immediate: the solution is expressed as a quotient of two continuously varying expressions, of which the denominator never becomes zero (because this would make the coefficient matrix singular), so the result certainly varies continuously with the parameters. This is by no means the only theoretical result where Cramer's rule (or some result equivalent to it) plays a role, there are many more in abstract algebra, but it provides a simple concrete example. Marc van Leeuwen (talk) 08:01, 15 December 2009 (UTC)
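A small symbolic sketch of that continuity argument (the particular matrix is my own illustration; SymPy assumed): the coefficient matrix below has determinant t² + 1, nonzero for every real t, and Cramer's rule exhibits the solution as a quotient of polynomials in t whose denominator never vanishes, hence continuous in t.

```python
from sympy import Matrix, simplify, symbols

t = symbols('t', real=True)
A = Matrix([[1, t], [-t, 1]])   # det = t**2 + 1, nonzero for all real t
b = Matrix([1, 2])

# Cramer's rule: replace each column of A by b in turn
A1 = A.copy(); A1[:, 0] = b
A2 = A.copy(); A2[:, 1] = b
x = Matrix([A1.det() / A.det(), A2.det() / A.det()])

# x = ((1 - 2t)/(t**2 + 1), (t + 2)/(t**2 + 1)): a ratio of polynomials
# with nonvanishing denominator, so it varies continuously with t
assert simplify(A * x - b) == Matrix([0, 0])
```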
- Marc van Leeuwen pretty much says it. There is a widespread dogma that determinants in general (and Cramer's rule in particular) are "useless", seemingly perpetuated by mathematicians whose definition of usefulness seems to be proportional to their own ability to write fast Fortran code. I have always dismissed such claims. Determinants are used throughout mathematics, and anyone who believes that they are "useless" needs to get out more. Sławomir Biały (talk) 17:04, 16 December 2009 (UTC)
- With regard to one of the questions above, regarding applications of Cramer's rule to differential geometry, many of the properties of the Hodge star operator tacitly rely on Cramer's rule and variants thereof. Sławomir Biały (talk) 17:05, 16 December 2009 (UTC)
- Let me approach this differently then, as I am not an abstract algebraist. None of the applications of Cramer's rule given on this page backs up the claim for its theoretical importance. If you have convincing examples, which it sounds like you do, why not add them? As a computational tool, though, Cramer's rule should be avoided, and the literature that claims it is effective as an algorithm on parallel machines seems to be based on the notion that the formula for individual components offers something that the formula e_i^T A^{-1} b does not. This formula makes it clear that each individual component of the solution is, in general, dependent on every entry of the inverse matrix. Having read through a lot of papers making claims for Cramer's rule-based parallel algorithms (including misleading or unverifiable claims about stability), I certainly feel justified in rewriting the relevant sections once I've collated the appropriate references. One point of agreement is that the computational cost of Cramer's rule is overstated in many texts but, even if the determinant is computed efficiently, it is still impractical.
- On the more theoretical side, I think that there is an undue emphasis on determinants in linear algebra. In my opinion, this is down to the historical precedence of determinants over matrices. However, I will willingly accept that determinants are a fundamental tool. But apart from anything self-referential, the fundamental results in matrix algebra (over R or C) can be stated and proved without determinants (see "Down with Determinants" in AMM, for example) by using eigenvalues, for example, as atoms. And I don't see any need to throw personal insults at people who disagree with me. I don't think anyone has lived until they've expanded a 5x5 determinant! —Preceding unsigned comment added by 130.159.17.136 (talk) 13:30, 17 December 2009 (UTC)
- The Bareiss algorithm is a computationally efficient multistep algorithm implementing Cramer's rule. It is supposed also to be more numerically stable than ordinary Gaussian elimination for matrices that are "nearly singular", but I do not know in what sense this is true. Most computer algebra systems include this in the symbolic toolkit because it works over arbitrary commutative rings. Also, while it is probably true that many of the main results of matrix algebra can be achieved without determinants, at least over the complex field where eigenvalues are available, this is not a convincing argument that determinants are "useless". Pick a linear algebra text at random and look up "characteristic polynomial" in the index, for instance. In fact, here is an extremely instructive exercise that should explode any doubts about the usefulness of determinants: Working exclusively over the rational field (no field extensions are allowed), devise an efficient way to compute exactly the characteristic polynomial of a matrix without the use of determinants. The restriction on not passing to a field extension is not as artificial as it might seem at first: exact representation of roots in a manner suitable for symbolic computation is itself a nontrivial task. If I posed this as a final project in my graduate algebra course, I wonder how many of my students would be able to do it. Sławomir Biały (talk) 14:29, 17 December 2009 (UTC)
- If your students have access to the lambda-matrices material or the proof of the rational canonical form, they are not going to have problems computing it without determinants. O(n^3) algorithms are known. [1] franklin 16:21, 17 December 2009 (UTC)
- This is the general idea, at least as a corollary of the structure theorem for modules over a PID. Still, there is a rather instructive gap between knowing the proofs of these results and being able to put together a practical method, would you not agree? Sławomir Biały (talk) 16:28, 17 December 2009 (UTC)
- Also, there are polynomial-time algorithms for computing determinants. E.g., Dodgson condensation is O(n^3) too. So arguments that determinants are more computationally intensive don't really wash. Sławomir Biały (talk) 16:32, 17 December 2009 (UTC)
- Characteristic polynomial and determinants are equivalent problems. I don't think any argument to rule out determinants can come out of it. About your students, they are graduate students, they will manage. Just let them have the tools. franklin 16:59, 17 December 2009 (UTC)
- Interesting point about determinants and the characteristic polynomial being equivalent. From that perspective, I suppose the difference is somewhat subjective. Sławomir Biały (talk) 17:06, 17 December 2009 (UTC)
- Let me reiterate: I am not calling determinants "useless". I recognize they are indispensable in many situations. I personally think that their significance is over-emphasized in many areas, though. I am not suggesting that the characteristic polynomial isn't a beautiful thing. But that is not the topic here. I come here only to damn Cramer's rule! I yield to those of you who claim that Cramer's rule is theoretically important in abstract algebra, but I contend that the examples given on the encyclopedia page do not bear this out.
- So let me return to something I still think can be emphasized, and that is that it is computationally useless. As has been pointed out, determinants can be computed efficiently in many ways (and inefficiently, too). Using Cramer's rule, an individual element of A^(-1)b requires the computation of 2 determinants. Every subsequent component requires a further n x n determinant computation. For the cost of a single determinant computation, the whole solution can be computed stably using an elimination method or through orthogonal transformations (as far as I can tell, the most efficient determinant computations are carried out using elimination techniques).
- The stability results I know for Cramer's rule are sketchy, but I know its instability has been highlighted for 2x2 systems. Since its efficient computation would be on the back of some variant of Gaussian elimination, I can't see it being more stable. I am not familiar with current implementations of the Bareiss algorithm. Can you give a reference for the stability claims? I suppose these depend on definitions of "nearness to singularity".
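To make the determinant-counting in this thread concrete, here is a naive full-solution Cramer solver (a sketch of my own; the function name and example matrix are illustrative, NumPy assumed). It evaluates n + 1 determinants, each of which costs roughly as much as the single LU factorization with which `np.linalg.solve` handles the whole system at once.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: one determinant for the
    denominator plus one n-by-n determinant per component, i.e.
    n + 1 determinant evaluations in total."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b          # replace column j by the right-hand side
        x[j] = np.linalg.det(Aj) / det_A
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
print(cramer_solve(A, b))     # agrees with np.linalg.solve(A, b)
```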
(unindent) If one believes that Bareiss' algorithm is really a condensation-based approach to Cramer's rule, then one reference is:
- Bareiss, Erwin (1968), "Sylvester's Identity and Multistep Integer-Preserving Gaussian Elimination", Mathematics of computation, 22 (102): 565–578.
But here I don't see any precise claims of stability (though I thought it was the case). Anyway, I agree that the article here is basically crap. Sławomir Biały (talk) 17:24, 17 December 2009 (UTC)
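For readers following up the Bareiss reference: the essential point is that the two-step elimination formula divides by the previous pivot, and Sylvester's identity guarantees this division is exact, so an integer matrix stays integer throughout. A minimal sketch of the determinant computation (my own, for illustration only):

```python
def bareiss_det(M):
    """Determinant of an integer matrix by Bareiss's fraction-free
    elimination: every // below is an exact integer division."""
    A = [row[:] for row in M]   # work on a copy
    n = len(A)
    sign = 1
    prev = 1                    # previous pivot (Sylvester's identity divisor)
    for k in range(n - 1):
        if A[k][k] == 0:        # swap in a nonzero pivot if one exists
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0        # whole column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
        prev = A[k][k]
    return sign * A[n - 1][n - 1]
```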
- More theoretical examples of usefulness: bounding solutions of systems of equations, bounding the number of solutions (discrete situations). Intense use is made in proofs of transcendence of exponentials, and in the proof of Roth's theorem. The claim against Cramer's rule in many books is just made to discourage students from using it for large dimensions, since, given the topics of the course, they will be tempted to compute determinants by expanding along a row or the like instead of doing elimination, which is a central goal in many courses. I believe the statement about the (computational) usefulness of Cramer's rule should be made (with emphasis on its methodological character) and not be mentioned in almost every section without references (which is how it is now). franklin 18:34, 17 December 2009 (UTC)
Here's a small comment on some of the above issues. The Wikipedia article on Newton's identities describes a procedure to compute the characteristic polynomial of a matrix using only matrix multiplication, traces, and other algebraic operations (but no determinants); I'm not sure about the complexity, but it definitely stays within the rationals. That article also gives an application of Cramer's rule, though its usefulness is a matter of taste: while a recursive procedure for expressing one family of polynomials in terms of another is easily found, the result can only be expressed in a "closed" form by a determinant obtained from Cramer's rule.
Then, about the pros and cons of determinants, notably relative to eigenvalues: I think one needs to learn about both, and in my opinion determinants are slightly more elementary (they require knowing about permutations and signs, but not about factoring polynomials). To me the most essential facts about determinants are
- The determinant is given by a polynomial expression in the coefficients.
- The determinant is multiplicative.
- A square matrix is invertible if and only if its determinant is.
If, as in the "Down with Determinants" paper mentioned above, one defines the determinant as the product of the eigenvalues, in the algebraic closure of the coefficient field and taken with appropriately defined multiplicities, then all three of these points are problematic. As for the final one, it would tell us that over a field a square matrix is invertible unless the product of its eigenvalues is zero; never mind looking for them in an extended field, it does not require much genius to see that this means unless there is an eigenvalue zero, that is, unless the matrix has a nontrivial kernel, so a square matrix over a field is invertible unless it isn't. See also some discussion here. Marc van Leeuwen (talk) 14:50, 26 December 2009 (UTC)
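One determinant-free procedure of the kind mentioned above is the Faddeev-LeVerrier recurrence, which builds the characteristic polynomial from matrix products and traces alone and stays within the rationals throughout. A sketch (my own, not taken from either article under discussion):

```python
from fractions import Fraction

def char_poly(A):
    """Coefficients [1, c1, ..., cn] of det(x*I - A) via the
    Faddeev-LeVerrier recurrence: only matrix products, traces,
    and exact rational arithmetic, with no determinants."""
    n = len(A)
    A = [[Fraction(v) for v in row] for row in A]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    def trace(X):
        return sum(X[i][i] for i in range(n))

    coeffs = [Fraction(1)]
    M = [[Fraction(0)] * n for _ in range(n)]
    for k in range(1, n + 1):
        for i in range(n):            # M <- M + c_{k-1} * I
            M[i][i] += coeffs[-1]
        M = matmul(A, M)              # M <- A * (M + c_{k-1} * I)
        coeffs.append(Fraction(-trace(M), k))
    return coeffs
```

For instance, `char_poly([[1, 2], [3, 4]])` gives the coefficients of x² - 5x - 2.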
Cramer's rule proof
I would like to add a section to this article to provide a proof of Cramer's rule. The proof is on my user page. Djwinters (talk) 00:56, 16 January 2010 (UTC)
- Do it. But please, type those equations; don't include those images. Or just put it in and someone else will replace the images. franklin 01:47, 16 January 2010 (UTC)
- Another thing is that the proof works only with three variables and three equations. franklin 01:48, 16 January 2010 (UTC)
- The proof is unclear about what it sets out to prove (the whole article could be clearer about the precise statement of Cramer's rule as well), and is phrased more like a recipe than a proof. As far as I can tell it only aims at showing that the rule provides a solution in favorable cases (not that this is the only solution), and only works well if not only the denominator determinant is nonzero (which is part of the rule's hypotheses), but also all the individual x_i in the solution. In other words, I think the proof could be improved. It would also be good to provide a reference for this proof; or is it your own invention? Marc van Leeuwen (talk) 21:55, 16 January 2010 (UTC)
- I agree that other proofs (maybe better ones) can be given. But I disagree with the claim that it doesn't work for systems with some zeroes in the solution. These are rational functions; there is a general principle (following from analytic continuation and many other principles in complex analysis) that if these manipulations work for non-zero values (as the proof shows), then they work in general. The uniqueness of the solution can be derived easily from the very statement of Cramer's formulas. franklin 22:29, 16 January 2010 (UTC)
- I modified the proof so that there is no apparent problem when there are zeroes in the solution. The thing is that even if the previous proof was correct, a proof that raises this kind of concern, explainable only in a somewhat complicated way, won't be very stable: many others may come asking the same question, which generates instability, and it would mean constantly sending people to read the explanation on the talk page. There is also a geometric proof of Cramer's rule; I will see if I can generate the necessary images to include it. It is very simple. The only strange thing is the fact that the determinant is an area (or a volume), although that (depending on the context) can be taken as the definition. franklin 00:42, 17 January 2010 (UTC)
Mangled sentence
This appears in the article:
-
- Cramer's rule can be used to prove that an integer programming problem whose constraint matrix is totally unimodular and whose right-hand side has integer basic solutions.
Someone forgot to finish the sentence. Michael Hardy (talk) 03:44, 30 March 2010 (UTC)
Finding inverse matrix
This section of the article seems to be out of place. There is apparently no use made of Cramer's rule in finding the inverse matrix and no use made of inverse matrices elsewhere in the article. Should we remove it? SpinningSpark 17:38, 17 July 2010 (UTC)
- It is in an odd place. But my guess is that the majority of (at least American) college students' first, last, and only interaction with Cramer's rule will be as a means to find the inverse of 2x2 matrices. It doesn't belong under a level-2 heading, though. Protonk (talk) 19:02, 17 July 2010 (UTC)
- I have added a few lines to the Cramer's rule proof section to show the connection with the adjugate matrix. Djwinters (talk) 20:56, 12 August 2010 (UTC)
Time Complexity
There are two claims about the time complexity of the rule: is it O(n*n!) or is it O(n^3)? 161.31.73.176 (talk) 16:01, 29 January 2016 (UTC)
maths
I want to learn how to calculate complementary angles Gavazacarol (talk) 16:22, 4 May 2016 (UTC)
Cramer's rule and the determinant
I could be very wrong about this, but isn't Cramer's rule closely related to the history of the determinant? That should probably be mentioned or at least linked here; preferably it could have its own section. Gyinagal (talk) 01:03, 28 September 2020 (UTC)
Proof & Cofactor coëfficients
The proof states C_j is a coefficient that does not depend on column j. The Laplace/cofactor expansion steps through the matrix row-wise, so this description is misleading.
C_j does not depend on column j; this is correct. C_1 (or any C_i for that matter) is, however, different for each column (e.g. D_j) considered. It seems to me more proper to call this C_j "C_{i,j}" instead, both indicating it is independent of row i and column j, and also indicating that the C's (cofactors) are different between columns, not identical as previously implied. 145.94.255.226 (talk) 14:18, 5 October 2023 (UTC)
The Laplace/cofactor expansion is stepping through the matrix row-wise
: This is true for Laplace expansion along a row, but Laplace expansion can also be done along a column (this is almost evident, since a matrix and its transpose have the same determinant, and transposition exchanges rows and columns). So the proof is correct. Moreover, the proof does not use Laplace expansion. However, I have edited Laplace expansion to clarify this.
However, reading the proof again, I agree that it would be clearer to rename C_i as C_ij, and I'll do that. D.Lazard (talk)
Structure of the proof
The structure of the proof, to me, seems to be the following
1) Assume that a solution x exists.
2) Show that a solution, if it exists, satisfies a certain condition.
3) Show that any value satisfying this condition does, in fact, solve the original problem.
I think part 3) of the proof is unnecessarily confusing. I therefore suggest the following structural change:
1) Conclude that since the determinant of A is nonzero, a solution must exist.
2) If a solution exists, it satisfies a certain condition.
3) Done! We've characterized all the solutions.
Additionally, I think the proof is a little confusing to read. I think it could be made simpler if you simply replace one of the columns of A with the components of b, write those components as linear combinations of the components of x, and then consider the determinant and apply properties of determinants in a straightforward manner. SchwaWolf (talk) 09:41, 9 April 2024 (UTC)
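The column-replacement computation suggested above fits in two lines (a sketch of the suggested argument, not the article's current proof). Writing A_j(v) for A with its j-th column replaced by v, and a_k for the columns of A:

```latex
% Given A x = b, so that b = \sum_k x_k a_k:
\[
  \det A_j(b)
    = \det A_j\Bigl(\sum_{k=1}^{n} x_k a_k\Bigr)
    = \sum_{k=1}^{n} x_k \det A_j(a_k)
    = x_j \det A,
\]
% since A_j(a_k) has two equal columns whenever k \ne j, while A_j(a_j) = A.
% Hence x_j = \det A_j(b) / \det A whenever \det A \ne 0.
```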
- I agree that the proof could be improved. Your approach for proving that, if a solution exists, then it must have the form specified, is also what I had in mind. What was your idea for proving that a solution must exist? I think it's enough to say that any matrix with nonzero determinant has full column rank, so b can be represented as a linear combination of the columns of A, whose coefficients are given by x. Angegane (talk) 06:24, 22 April 2024 (UTC)
- I agree also that the proof could be improved. But your proof scheme here is far too complicated. The proof consists of only two steps, and only the first one belongs to this article:
- Cramer's rule provides a solution if the determinant is nonzero.
- If the determinant is nonzero, then there is exactly one solution.
- The second part is totally independent of Cramer's rule and belongs to the article Determinant. So the second part must be recalled and linked to Determinant, but not proved here. The only proof that is useful here is simply that all equations are satisfied if one replaces the variables with the values given by Cramer's rule. D.Lazard (talk) 09:01, 22 April 2024 (UTC)
- I disagree with your approach because I think it is pedagogically unwise to come up with an x "out of thin air" and show that it satisfies the system of equations, especially when the "honest" proof (the one I suggested) is fairly easy and intuitive. SchwaWolf (talk) 14:38, 22 April 2024 (UTC)