
Why does numpy calculate matrix determinant incorrectly


I compute it like this. The determinant must be zero, I checked it on paper and in online calculators, but this code prints something like 5.329070518200744e-15.
What am I doing wrong? Maybe I was careless somewhere, and if not, what is the best way to calculate it?
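A minimal sketch of the computation being described, assuming the same 4×4 matrix that appears in the answer below:

import numpy as np

# The 4x4 matrix from the answer below; its rows are linearly dependent,
# so the exact determinant is 0.
A = np.array([[ 1,  1, 2, -1],
              [ 2, -1, 0, -5],
              [-1, -1, 0, -2],
              [ 6,  3, 4, -3]])

# np.linalg.det works in floating point, so depending on the
# NumPy/LAPACK build this prints exactly 0.0 or a tiny value
# such as the 5.329070518200744e-15 quoted above.
print(np.linalg.det(A))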


Answer 1, authority 100%

I suspect it depends on the Python version and especially the NumPy version.
In Google Colaboratory the result is exactly 0.0, even when printed to 64 decimal places.
I tried different data types (by default this matrix comes out as numpy.int64), for example numpy.int16 or numpy.float32: no difference, it is still 0.0.
numpy.float16, however, cannot be used: numpy.linalg complains that it does not support that type.
But just for fun, check what data type you actually get in the matrix:

print(type(A[0, 0]))
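A small sketch of the dtype experiment described above; the assumption is that numpy.linalg raises a TypeError for array types it does not support, such as float16:

import numpy as np

rows = [[1, 1, 2, -1],
        [2, -1, 0, -5],
        [-1, -1, 0, -2],
        [6, 3, 4, -3]]

# int16, int64 and float32 all work (integer input is converted to
# float64 internally); float16 is rejected by numpy.linalg.
for dt in (np.int16, np.int64, np.float32, np.float16):
    A = np.array(rows, dtype=dt)
    try:
        print(dt.__name__, np.linalg.det(A))
    except TypeError as err:
        print(dt.__name__, "->", err)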

The versions in Google Colaboratory are:

Python 3.6.9
Numpy 1.18.5

The code with which I checked everything:

import numpy as np
A = np.array([[ 1,  1, 2, -1],
              [ 2, -1, 0, -5],
              [-1, -1, 0, -2],
              [ 6,  3, 4, -3]]  # , dtype=np.float32
             )
print(np.__version__)
print(type(A[0, 0]))
print(np.linalg.det(A))
print(f"{np.linalg.det(A):.64f}")

Result:

1.18.5
<class 'numpy.int64'>
0.0
0.0000000000000000000000000000000000000000000000000000000000000000
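More generally, values like 5.329070518200744e-15 are just floating-point rounding noise from the LU factorization that np.linalg.det performs, so regardless of the NumPy version it is safer to compare a determinant with zero using a tolerance rather than exact equality. A minimal sketch of such a check (np.isclose and np.linalg.matrix_rank are standard NumPy; the tolerance value here is only an example):

import numpy as np

A = np.array([[1, 1, 2, -1],
              [2, -1, 0, -5],
              [-1, -1, 0, -2],
              [6, 3, 4, -3]], dtype=np.float64)

d = np.linalg.det(A)

# Treat anything within a small absolute tolerance of zero as zero.
print(d, np.isclose(d, 0.0, atol=1e-12))

# Alternative singularity check: the rank is below full rank
# (matrix_rank already applies a tolerance internally).
print(np.linalg.matrix_rank(A) < A.shape[0])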
