What Is Science

LostAbaddon
A while ago I promised a friend that I would write a popular-science article, but I kept putting it off. Today I simply don't feel like working, so let's write it now.
PS: What follows is just my personal understanding of science; mistakes and omissions are only to be expected. Who these days would dare claim that their own understanding must be correct?

Science has long since penetrated every aspect of our modern life.

It is as ubiquitous and as vital as air, and, like air, just as easily overlooked.

So what exactly are we talking about when we talk about "science" and "the scientific"?


People often deify "science", conflating it with notions such as "correct" and "truth", and even treating "scientific" as synonymous with "correct". Strictly speaking, this is wrong: "scientific" does not mean "correct", and likewise "correct" does not necessarily mean "scientific".

To discuss "science", we first need to sort out how it relates to, and differs from, "non-science", "anti-science" and "pseudoscience"; only then can we look at its relationship with "correctness".

In the "Science and Pseudo-Science" entry of the Stanford Encyclopedia of Philosophy (SEP), the distinctions among these four are drawn roughly as follows:

  • Any systematic body of knowledge that is not science is "non-science", as the prefix "non" suggests;
  • "Anti-science" is a narrower notion than "non-science": it refers specifically to the part of non-science that contradicts or conflicts with science in some way;
  • "Pseudoscience" is narrower still than "anti-science": it is anti-science that masquerades as science and is mistakenly taken to be based on scientific methods.

More specifically, in the account of the philosopher of science Hansson, there are two criteria for judging whether a body of knowledge is pseudoscience:

  1. It is not science;
  2. Its main proponents try to create the impression that it is science.

So "pseudoscience" is not merely not science (hence non-science); it also conflicts with science to some degree (hence anti-science); and, more importantly, it is "disguised" as science.

So, what exactly is "science"? If "scientific" meant "correct", then non-science would naturally be "not correct". Following the idea of many-valued logic, "not correct" covers three possible states: "wrong", "undecidable" and "self-contradictory".

However, even without having settled what science is, we already know that philosophy, art and craft are all non-science. Are they therefore wrong, undecidable or self-contradictory? Obviously not.

So from logic alone we can see that "correct" does not imply "scientific": there is at least a large class of "correct" things that are not "scientific".

The question therefore remains: what exactly is "science"? We seem to be back to square one.


In the philosophy of science there is a great deal of controversy about science, and about scientific realism in particular. One could even say that every philosopher of science carries their own definition of scientific realism in mind.

One can hardly help thinking of Wittgenstein here, but that is an aside we will not pursue.

We will not go through the various schools of thought on what science is, but simply offer the following definition:

Science is a family of theories that start from measurable phenomena and seek the underlying laws behind them, that can give testable predictions about phenomena not yet measured, whose predictions agree with the measurements of those new phenomena within an error tolerance recognized by the scientific community, and whose predictions rest on a self-consistent logical system; together with the intellectual activities carried out around this family of theories.

This definition contains quite a few qualifications. For example, the object of scientific study is measurable phenomena, so something like "there exists a God whose effects you cannot measure in any way" falls outside the scope of science, as do things like "the colour of a pink transparent unicorn", which sound profound and mysterious at first hearing but cannot be measured.

The purpose of science is also clear: to explore the laws behind phenomena, which means we must assume that laws of some form exist. Even purely random behaviour is itself a law, namely that the prior distribution over all possible states is uniform; even that piece of near-tautology expresses a law. As for something truly without any regularity whatsoever, we may as well leave it to Azathoth...

The means by which science expresses itself is giving testable predictions about new phenomena, which reflects Karl Popper's requirement of falsifiability. More specifically, though, what counts as "confirmation" or "falsification" depends on the error tolerance accepted by the scientific community (more precisely, by the community of the relevant discipline). High-energy physics experiments, for example, usually adopt the so-called "5σ standard": the probability that the observed phenomenon is due entirely to error must be no greater than about 1 in 3.5 million. If the same standard were applied to economics, virtually no economic theory would survive; different disciplines therefore have different standards, which is why the qualifier "recognized by the disciplinary community" has to be added. Clearly, human judgment already plays a major role here, and whether something is scientific is not a matter that a single experiment can settle once and for all.
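
As a quick sanity check on those numbers, here is a small Python sketch (using SciPy; purely illustrative) that converts sigma levels into one-sided tail probabilities under a normal distribution:

```python
# Convert "n-sigma" significance levels into one-sided tail probabilities
# under a standard normal distribution (illustrative only).
from scipy.stats import norm

for n_sigma in (3, 5, 6):
    p = norm.sf(n_sigma)  # P(Z > n_sigma), the one-sided tail probability
    print(f"{n_sigma} sigma: p = {p:.2e}  (about 1 in {1 / p:,.0f})")

# Roughly:
#   3 sigma -> p ≈ 1.3e-3  (about 1 in 740)
#   5 sigma -> p ≈ 2.9e-7  (about 1 in 3.5 million)
#   6 sigma -> p ≈ 1.0e-9  (about 1 in a billion)
```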

After that comes science's internal method of deduction, which must rest on a self-consistent logical system; this lets us rule out nonsense that is plainly illogical.

Finally, all the human intellectual activities carried out around the above are, taken together, also called science.

From this it can be seen that the word "science" actually comprises a methodology, an evaluation system, and a body of theory that is at least logically self-consistent.

It is now also easy to see why philosophy and art are non-science: the objects they study are not measurable. So although philosophy ought to be logically self-consistent, it is clearly not science. Of course, "not science" is not in itself derogatory in any way, even though in everyday speech it often sounds like it is.

Art, too, has its own logic. In 150 Years of Modern Art, Will Gompertz describes in detail the internal creative logic of the art movements of different periods. Although this logic is neither mathematical nor formal, it has its own inner consistency.

Similarly, religion and theology have their own logical systems. These are obviously neither formal logic nor mathematical logic, but they are self-consistent; causality in Buddhism, for instance, is a distinctive logical system within the Buddhist framework. So we can say that religion and theology are not science, but we cannot say that they do not follow logic. Indeed, "does not follow logic" is a more precise, or "narrower", label than "not science": it picks out a specific subset of "non-science", namely those knowledge systems that cannot achieve logical self-consistency.

Technology, too, is a kind of non-science. It is very closely tied to science, yet it is still not science, because its purpose is not to "explore the essential laws behind phenomena".

Another thing very closely related to science is mathematics. Most scientific theories use mathematics as their descriptive language, but mathematics itself is not science, because its objects of study are not measurable phenomena. Of course, mathematical objects can correspond to many measurable things: one apple plus one apple equals two apples, the diameter of a manhole cover is 0.7 metres, and so on. But these are not the mathematical objects themselves; they are models of mathematical objects (model here in the sense of model theory). Mathematics does not study phenomena directly. Its objects cover every formal system in which a self-consistent logic can be established, and sometimes it cares less about the objects inside a system than about the properties of the system as a whole, as in category theory. So mathematics, too, belongs to "non-science", from which one can see that the label "non-science" is by no means beneath "science", and is sometimes even above it.

But some non-scientific theories really are of a lower order, for example superstitious doctrines of the "don't ask why, just believe" kind. They may be logically self-consistent, though even that is often not guaranteed. More importantly, their evaluation system directly contradicts that of science. Science cares about the fit between theory and phenomenon: a theory must agree sufficiently well with the phenomena it describes. Superstition imposes no such requirement; even when theory and phenomenon diverge wildly, the theory is still accepted, because believing matters more than being accurate. In such systems the evaluation criteria are unscientific and irreconcilable with science.

The evaluation systems of technology and mathematics are not exactly the same as that of science, but they are compatible with it: mathematics demands logical self-consistency, for instance, and science can happily take that on board. The evaluation systems of philosophy and art are entirely different from science's, yet they still do not conflict with it. The evaluation standard of art, for example, is whether something is beautiful, and "beauty" lies outside the scope of scientific discussion, so however it is judged, it cannot contradict science.

Superstition is different. Superstitious systems routinely pass judgment on the "correctness" of real things. Although "correctness" is not something science decides (we will come back to this later), science does also pass judgment on such things, so the two can come into conflict.

Take telepathy. Whether it exists can be tested by experiment, which yields an assessment of whether telepathy has actually occurred. A superstitious person, however, will say that failing to detect it does not mean it does not exist, and that one should simply believe it does. Such a judgment obviously conflicts with science. We are not concerned with who is right and who is wrong here; the key point is that this system of judgment conflicts with science, and so superstition is a kind of "anti-science".

Of course, anti-science is not necessarily wrong, just like science is not necessarily right.

Why?

Science, in essence, only undertakes to tell you that its conclusions agree with the phenomena it considers to within a generally accepted error; it does not guarantee that they are correct, because the error is still there. A good example is the big blunder at CERN in 2011: researchers announced the discovery of superluminal neutrinos at a confidence of 6σ (meaning the probability that the signal was spurious was less than about one in a billion). Just as everyone was getting excited, several follow-up teams announced that they could not reproduce the result, and the final investigation found that the apparent faster-than-light speed was most likely caused by a fault in the experimental apparatus.

In other words, every step of the procedure and analysis was scientific right up until the equipment fault was identified, yet the result was not correct.

Another example: after Einstein proposed the general theory of relativity, Eddington used a solar eclipse to test its predictions and finally declared that the theory agreed with the observations. Later analyses of Eddington's data, however, found that he had, whether intentionally or not, discarded some "inconvenient" data points, and thereby "manufactured" the conclusion that general relativity matched the eclipse observations. By modern standards this amounts to doctoring experimental data, and with the complete raw data the agreement between general relativity and the eclipse observations would apparently not have been good enough, implying the theory was wrong. Yet a large number of follow-up experiments, especially high-precision experiments in recent years, have confirmed general relativity almost without exception.

These two examples tell us that an error within the accepted tolerance does not make a theory correct, and an error outside the tolerance does not make it definitely wrong.

So, purely in terms of right and wrong, "scientific" means neither "right" nor "wrong"; it falls under "undecidable". "Scientific" only means that theory and phenomenon agree well enough, and agreeing well enough does not mean being identical, does it?

By the same logic we can say that anti-scientific theories usually agree with phenomena less well than science does (of course, there are special kinds of anti-science that agree extremely well, such as "this is all for the best" or "it is God's test", which explain away every discrepancy as reasonable; how useful such theories are speaks for itself), but a poor fit does not mean a theory is necessarily wrong, does it?

To go one level deeper, we also need to ask where the error comes from. If we think of a theory as a mapper that takes inputs to outputs, there are at least four sources of error:

  1. Errors in the input data introduced by measurement and similar factors
  2. Errors in the output results introduced by measurement and similar factors
  3. The discrepancy between the theory and the underlying law it targets
  4. The genuine randomness inherent in the law itself

The first two errors do not, strictly speaking, belong to the theory itself, but they are unavoidable and cannot be separated out. The fourth is a property of the law itself, likewise unavoidable and inseparable. The third is the question of whether the theory itself is "correct" enough, but unfortunately it cannot be disentangled from the other three.
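
To make these four sources concrete, here is a toy Python sketch (all distributions and magnitudes are invented for illustration) in which a "true law" with built-in randomness is observed through noisy inputs and outputs and then compared with a slightly wrong theory; the residual we actually see lumps all four contributions together:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_law(x):
    # The (unknown) underlying law, with genuine randomness built in (source 4).
    return 2.0 * x + 1.0 + rng.normal(0.0, 0.05, size=x.shape)

def theory(x):
    # Our theory: close to the law, but not identical to it (source 3).
    return 2.05 * x + 0.9

x_true = np.linspace(0.0, 10.0, 200)
x_measured = x_true + rng.normal(0.0, 0.1, size=x_true.shape)  # input measurement error (source 1)
y_true = true_law(x_true)
y_measured = y_true + rng.normal(0.0, 0.1, size=y_true.shape)  # output measurement error (source 2)

# All we ever get to see is this combined residual; the four contributions
# cannot be told apart just by staring at it.
residual = y_measured - theory(x_measured)
print("mean residual:", residual.mean(), " std:", residual.std())
```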

This unavoidable, inseparable character means that no matter how well a theory fits the phenomena, it may still be wrong, and no matter how poorly it fits, it may still be correct.

Seen in this light, it is not entirely unreasonable for the philosopher Nancy Cartwright to regard science as a kind of deception, though it is still something of a stretch.

In fact, the important connotation of the word "science" lies not in correctness but in its dynamic character: as phenomena keep accumulating, science will, in principle, exhibit a self-correcting dynamic, because if a theory is wrong, its agreement with the phenomena deteriorates, and science will then reject it and be forced to search for a new theory.

In other words, the greatest feature of "science" is that it allows the whole system to keep adjusting itself as phenomena accumulate, so as to evolve in the direction that best matches the phenomena.

Of course, we do not know whether the scientific system that best fits the phenomena is correct, but we at least know that within the range already tested, science is trustworthy because it fits best, and that for phenomena not yet tested, science remains the most trustworthy option we have.

But it must be emphasized that this is merely a shared belief, not necessarily a fact.

To explain this, we can regard science, or rather a scientific theory and the underlying law behind the phenomena it describes, as a "mapping machine" that maps a set of input data (the environmental and state parameters making up a phenomenon) onto a set of outputs (the measured values of the phenomenon). Both inputs and outputs can obviously be represented as real numbers, and if we agree that there are only finitely many input parameters and finitely many output parameters, such a mapper is essentially a map from the real numbers R to R, which can be pictured as a curve over R (not necessarily continuous). But the number of experiments humans can perform is finite, so in fact we only know the outputs at countably many inputs. The curve contains ℵ₁ (aleph-one) points, while the points that can be checked by experiment are finite in number, or at most ℵ₀ (aleph-null, countably many). Among all possible phenomena, therefore, the proportion that can be verified by experiment is zero.

Mathematically, if the curve satisfies a suitable continuity requirement, then knowing its values at countably many points can of course determine the whole curve; the trouble is that the curve corresponding to real phenomena need not satisfy any continuity requirement, and so inferring infinitely many consequences from finitely many observations can never be fully trusted.
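
For the mathematically inclined, the cardinality bookkeeping behind this argument can be written out as follows (a sketch, taking the "curve" to be an arbitrary function f from R to R and D the set of inputs actually probed by experiment):

```latex
% D \subset \mathbb{R} is the at most countable set of experimentally probed inputs.
\[
  |C(\mathbb{R})| = 2^{\aleph_0}
  \quad\text{(a continuous $f$ is already fixed by its values on a countable dense set),}
\]
\[
  \bigl|\{\, g:\mathbb{R}\to\mathbb{R} \;:\; g|_{D} = f|_{D} \,\}\bigr|
  = \bigl(2^{\aleph_0}\bigr)^{2^{\aleph_0}} = 2^{\,2^{\aleph_0}}
  \quad\text{(with no continuity assumption at all),}
\]
% so without continuity the verified points leave the curve almost entirely unconstrained.
```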

This is the essence of Hume's problem of induction: no matter how many times we test, no matter how much data we gather, we can never exhaust all the possibilities, so on what grounds should we believe in science?

From a Bayesian standpoint, more passed tests certainly make a science more credible, but obviously not all philosophers accept Bayesian inference.
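
As a minimal sketch of what that Bayesian claim amounts to (the prior and likelihoods below are made up purely for illustration), each passed test is more likely if the theory is roughly right than if it is wrong, so Bayes' rule pushes the posterior upward:

```python
# Toy Bayesian updating: repeated passed tests raise confidence in a theory H,
# but never make it certain. All numbers are invented for illustration.
prior_H = 0.5               # initial credence in H
p_pass_given_H = 0.95       # chance a test passes if H is (approximately) right
p_pass_given_not_H = 0.50   # chance the test passes anyway if H is wrong

posterior = prior_H
for test in range(1, 11):
    evidence = p_pass_given_H * posterior + p_pass_given_not_H * (1.0 - posterior)
    posterior = p_pass_given_H * posterior / evidence  # Bayes' rule, given a passed test
    print(f"after {test:2d} passed tests: P(H) = {posterior:.4f}")

# The posterior climbs toward 1 but never reaches it: no finite number of
# successful tests makes the theory certain.
```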

But we can at least confirm one thing: science does not always mean being right; it means only that, even if it is wrong now, it has the capacity to correct itself in the future.

This is nothing rare in the history of science: a great many wrong theories have emerged over the long course of time, were once taken for the truth, and were later replaced by new theories.

When such self-correction and self-renewal accumulate to a certain scale, the whole edifice of science undergoes the kind of major change Thomas Kuhn called a "paradigm shift": a transformation that starts at the most basic conceptual level and thoroughly overhauls the methodology, the evaluation system, the structure of propositions and so on. Think of the move from Newtonian mechanics to relativity, from classical physics to quantum physics, or from point-particle physics to string theory (whether this last counts as a paradigm shift is still debated among philosophers), and so forth.

It can be said that science's capacity for self-correction rests on its evaluation system, namely the requirement that theory agree with phenomena. Since the phenomena keep accumulating dynamically, the theory must keep adjusting dynamically as well, so that the two can remain consistent at all times.

However, this evaluation system is not some naturally existing thing-in-itself. It was not handed down by God but created by human beings, so its reliability is open to question, especially when it changes, and changes not in a stricter but in a looser direction.

For example, in recent years, as high-energy physics has pushed toward ever higher energies, human laboratories have become unable to keep up with the increasingly unrestrained imaginations of theoretical physicists, and some physicists have begun to question whether experimental testability is really necessary.

In their view, a theory is scientific as long as it is mathematically self-consistent. But we all know that the objects of mathematics live in an abstract realm far broader than nature. Self-consistency only establishes that the mathematics is valid; it does not mean the theory actually describes the natural world we inhabit. We could perfectly well build a logically self-consistent formal system around Yin-Yang, the Five Elements and Doctor Strange, but its only connection to the world we live in would be the comic book it is printed in.

This already amounts to abandoning science's traditional evaluation system. After all, if science no longer needs to be tied to reality (that is, to phenomena), then how does it differ from mathematics, or even from philosophy?


Finally, an idle question: can Occam's razor serve as the criterion for selecting scientific theories?

From the perspective of algorithmic information theory, suppose two Turing machines are functionally equivalent, that is, for any input X, machines A and B always give the same output. Then, in a setting where programs are generated at random, the machine with the lower K-complexity, that is, the shorter incompressible length, is the more likely to arise by chance. In fact, according to Levin's coding theorem, the logarithm of a machine's probability in the random ensemble equals the negative of its incompressible length, up to an additive constant that depends on the description language but not on the machine itself.
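
Stated a little more carefully (a sketch: m denotes the universal algorithmic probability and K the prefix Kolmogorov complexity, both standard notions; the "machines drawn at random" framing follows the paragraph above):

```latex
% Levin's coding theorem; the O(1) term depends only on the reference universal machine.
\[
  -\log_2 m(x) \;=\; K(x) + O(1).
\]
% Hence for two functionally equivalent machines A and B,
\[
  \frac{m(A)}{m(B)} \;=\; 2^{\,K(B) - K(A) + O(1)},
\]
% so the machine with the shorter incompressible description is exponentially
% more likely to be generated at random.
```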

Seen this way, Occam's razor should be read as saying that, among functionally identical theories, the simpler law is the more likely to arise; whether it is the more correct one, only God knows.

So Occam's razor is in the same position as science itself: it has nothing to do with right or wrong, and using it to select scientific theories is, in the end, a matter of belief.

CC BY-NC-ND 2.0


LostAbaddon. Articles are horcruxes. Twitter: https://twitter.com/LostAbaddon NeoDB: https://neodb.social/users/LostAbaddon@m.cmx.im/ Mastodon: @LostAbaddon@m.cmx.im Personal website: https://lostabaddon.github.io/