They don't. Many theories in the social sciences, for example, aren't concerned with "material" phenomena and so can be (and indeed must be) formulated independently of any assumptions about whether they are reducible to material explanations (e.g., whether economic booms or recessions can in principle be explained in terms of the dynamics of subatomic constituents). In fact, one of the most successful theories in physics, statistical mechanics, is explicitly immaterialistic in that it deals with systems that are mathematical idealizations of physical systems. Quantum mechanics, at least canonically, is even worse: a quantum system is a purely mathematical entity, and quantum mechanics is irreducibly statistical (the "systems" it describes aren't real things but vectors in an infinite-dimensional complex space, "observed" by mathematical functions called Hermitian operators in order to relate particular experimental designs to measurement outcomes).
Of course, materialism is mostly a philosophical position, and we don't generally even use the term or consider the issues, as (unfortunately) too many of us have neither familiarity with nor training in philosophy or metaphysics, leaving such questions mostly to philosophers of science who have scientific backgrounds (or worse, to philosophers without any scientific background). For example:
"materialism is waning in a number of significant respects—one of which is the ever-growing number of major philosophers who reject materialism or at least have strong sympathies with anti-materialist views. It is of course commonly thought that over the course of the last sixty or so years materialism achieved hegemony in academic philosophy, and this is no doubt right by certain measures—for example, in absolute number of self-identified materialist philosophers of mind or in absolute number of books and journal articles defending materialism. It is therefore surprising that an examination of the major philosophers active in this period reveals that a majority, or something approaching a majority, either rejected materialism or had serious and specific doubts about its ultimate viability. The following is just a partial sampling of these philosophers, more or less in order of birth.
Bertrand Russell, Rudolf Carnap, Alonzo Church, Kurt Gödel, Nelson Goodman, Paul Grice, Stuart Hampshire, Roderick Chisholm, Benson Mates, Peter Strawson, Hilary Putnam, John Searle, Jerrold Katz, Alvin Plantinga, Charles Parsons, Jaegwon Kim, George Myro, Thomas Nagel, Robert Adams, Hugh Mellor, Saul Kripke, Eli Hirsch, Ernest Sosa, Stephen Schiffer, Bas van Fraassen, John McDowell, Peter Unger, Derek Parfit, Crispin Wright, Laurence BonJour, Michael Jubien, Nancy Cartwright, Bob Hale, Kit Fine, Tyler Burge, Terence Horgan, Colin McGinn, Robert Brandom, Nathan Salmon, Joseph Levine, Timothy Williamson, Mark Johnston, Paul Boghossian, Stephen Yablo, Joseph Almog, Keith DeRose, Tim Crane, John Hawthorne, Richard Heck, David Chalmers."
from the editors' introduction to Koons, R. C., & Bealer, G. (Eds.). (2010). The Waning of Materialism. Oxford University Press.
True. For example, about the closest we can get in the sciences to "proof" is Bell's inequality, which "proves" that if we find particular correlations between space-like separated measurements of systems, then the only explanation is nonlocality. This, of course, isn't quite true: one can abandon realism instead, but as that would mean abandoning an assumption underlying all scientific theories, very few opt for it (none that I know of are physicists). The problem is that nonlocality entails some "cause" behind the dynamics of systems that isn't "materialistic". Newtonian gravitation was nonlocal, but at least it was "something" (i.e., an effect so constant in its influence on mechanics that all motion could be explained in terms of this one force). Nonlocality isn't a force, but a surprising and seemingly paradoxical feature of the universe with no such singular generalization. It can't be used to explain the dynamics of systems (Bell's inequality, for example, was derived from the mathematics of quantum mechanics, and first shown experimentally to be violated some 20 years later by Aspect et al.). It isn't even clear how to approach describing it (nonlocality is sometimes characterized as superluminal effects/processes, sometimes as effects/processes that occur in "no time"). But there is no "material" explanation for it, and indeed it violates classical causation.
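The CHSH form of Bell's inequality makes this concrete. Below is a minimal numerical sketch, not a derivation: it evaluates the standard quantum prediction for singlet-state spin correlations, E(a, b) = -cos(a - b), at the usual measurement angles, and compares the result to the bound |S| <= 2 that any local hidden-variable theory must satisfy.

```python
import math

# Quantum prediction for spin correlations of a singlet pair measured
# along directions a and b (angles in radians): E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Any local hidden-variable theory obeys |S| <= 2.
a, a_p = 0.0, math.pi / 2
b, b_p = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_p) + E(a_p, b) + E(a_p, b_p)
print(abs(S))  # ≈ 2.828 = 2*sqrt(2) > 2: the quantum prediction
               # exceeds the local-realist bound.
```

These are the angle choices that maximize the violation; experiments of the Aspect type measure essentially this quantity.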
Classical physics, including the "law of gravity", is wrong. It is true that our incomplete knowledge of physics hasn't provided us with an adequate replacement, as gravitation per se doesn't exist in general relativity and we haven't been able to incorporate GR into quantum physics. However, regardless of this lack of an adequate replacement, classical physics is still just plain wrong. It is not consistent with any theory of modern physics: classical electrodynamics, for instance, predicts that every electron in the universe should radiate away its energy and plummet into the nucleus it orbits almost instantly. Thus every second that atoms continue to exist provides as many counterexamples to classical physics as there are atoms in the universe.
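The "plummeting electron" point can be made quantitative with the standard textbook back-of-envelope estimate: treating the electron as a classical charge on a circular orbit, the Larmor radiation formula gives an infall time of roughly t = a0^3 / (4 r0^2 c), where a0 is the Bohr radius and r0 the classical electron radius. A quick sketch (rounded constants; the exact prefactor depends on the derivation):

```python
# Classical radiative-collapse time of a hydrogen electron: per the Larmor
# formula, an orbiting classical charge radiates energy and spirals inward
# in a time t = a0**3 / (4 * r0**2 * c) -- the standard textbook estimate.
a0 = 5.29e-11   # Bohr radius, m
r0 = 2.82e-15   # classical electron radius, m
c  = 3.00e8     # speed of light, m/s

t = a0**3 / (4 * r0**2 * c)
print(t)  # ~1.6e-11 s: classically, atoms should collapse almost instantly
```

That is, classical physics gives atoms a lifetime of about ten picoseconds, which is the sense in which every surviving atom is a counterexample.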
...that I linked to a free version of a peer-reviewed paper. Would you like more? A pure assertion is making the kind of inane and ill-informed claim you did about complex systems reducing to algorithms (a claim that contradicts the foundations of relational biology) without evidence, and then thinking that producing a bunch of popular-science garbage and other popular, non-technical literature is somehow indicative of anything other than an ability to use internet search engines. Even those scientists in relevant fields who believe that life and evolutionary processes can be reduced to computable models (algorithms are by definition computable) are well aware that nothing of the sort is remotely close to being shown.
No, they haven't. Not in the technical sense, as we remain unable to create complete models of single cells (which, as Robert Rosen and his followers argued, is impossible: cells are [M,R]-systems and hence closed to efficient causation).
No, they aren't. Computational neuroscience (part of what I do) involves models of neurons and neuronal networks; entire software environments like NEURON exist solely for creating neural models, and whole fields in HCI involve the creation of neuromorphic systems, BCIs, etc. But all of these are MODELS, or involve interfaces that are neither models nor simulations, and they tell us relatively little about the brain.
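To illustrate how much of a caricature even a working neural model is, here is a leaky integrate-and-fire neuron, about the simplest model used in computational neuroscience. All names and parameter values are arbitrary illustrations, not anything measured; the point is that the "neuron" is a one-line differential equation plus a reset rule.

```python
# A leaky integrate-and-fire (LIF) neuron: the membrane potential v decays
# toward rest, is driven by input current I, and "spikes" (then resets)
# when it crosses a threshold. Illustrative parameters only.
def lif_spike_times(I=1.5, v_rest=0.0, v_thresh=1.0, tau=10.0,
                    dt=0.1, t_max=100.0):
    """Euler-integrate dv/dt = (-(v - v_rest) + I) / tau; reset on spike."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        v += dt * (-(v - v_rest) + I) / tau
        if v >= v_thresh:          # threshold crossing -> record a spike
            spikes.append(round(t, 1))
            v = v_rest             # reset membrane potential
        t += dt
    return spikes

print(len(lif_spike_times()))  # constant suprathreshold input drives
                               # regular, periodic firing
```

Environments like NEURON solve far richer compartmental models, but the epistemic situation is the same: these are simulations of equations, not of brains.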
“We have demonstrated, for the first time to our knowledge, that computations performed and shaped by the dynamics of charges are radically different than computations in digital computers.”
Aur, D., & Jog, M. S. (2010). Neuroelectrodynamics: Understanding the Brain Language (Biomedical and Health Research Vol. 74). IOS Press.
Louie, A. H. (2005). Any material realization of the (M,R)-systems must have noncomputable models. Journal of Integrative Neuroscience, 4(4), 423-436.
"while leading computationalists have shown considerable ingenuity in elaborating and defending the conception of minds as computers, they have not always been attentive to the study of thought processes themselves. Their underlying attitude has been that no theoretical alternative is possible...The essays collected here are intended to demonstrate that this attitude is no longer justified."
Fetzer, J. H. (2001). Computers and cognition: Why minds are not machines (Studies in Cognitive Systems Vol. 25). Springer.
“The brain is not a computer, nor is the world an unambiguous piece of tape defining an effective procedure and constituting “symbolic information.” Such a selectional brain system is endlessly more responsive and plastic than a coded system.”
Edelman, G. M. (1999). Building a Picture of the Brain. Annals of the New York Academy of Sciences, 882(1), 68-89.
“no formal system is able to generate anything even remotely mind-like. The asymmetry between the brain and the computer is complete, all comparisons are flawed, and the idea of a computer-generated consciousness is nonsense.”
Torey, Z. (2009). The Crucible of Consciousness: An Integrated Theory of Mind and Brain. Cambridge, MA: MIT Press.
“To understand why neurons and computers are fundamentally different, we must bear in mind that modern computers are algorithmic, whereas the brain and neurons are not.”
Tse, P. (2013). The Neural Basis of Free Will: Criterial Causation. MIT Press.
“The free will theorem supports a powerful challenge to the scientific credentials of determinism, by showing, on certain well-supported assumptions, that two cornerstones of contemporary science, namely (1) acceptance of the scientific method as a reliable way of finding out about the world, and (2) relativity theory’s exclusion of faster-than-light transmission of information, taken together, conflict with determinism in both its versions. Belief in determinism may thus come to be seen as notably unscientific.”
Hodgson, D. (2012). Rationality + Consciousness = Free Will (Philosophy of Mind). Oxford University Press.
“The view that the brain does not compute Turing-computable-functions is still a form of wide mechanism in Copeland’s sense, but it is more encompassing than Copeland’s, because it includes both Copeland’s hypercomputationalism and the view that mental capacities are not explained by neural computations but by neural processes that are not computational. Perhaps brains are simply not computing mechanisms but some other kinds of mechanisms. This view fits well with contemporary theoretical neuroscience, where much of the most rigorous and sophisticated work assigns no explanatory role to computation”
Piccinini, G. (2007). Computationalism, the Church–Turing thesis, and the Church–Turing fallacy. Synthese, 154(1), 97-120.
“Referring to the ‘widespread belief ... in many scientific circles ... that the brain is a computer,’ neurobiologist Gerald Edelman (2006) insists that ‘this belief is mistaken,’ for a number of reasons, principal among which are that ‘the brain does not operate by logical rules’ (p. 21). Jerome Bruner (1996), a founder of cognitive science itself, yet, coincidentally, a key figure in the emergence of narrative psychology, challenges the ability of ‘information-processing’ to account for ‘the messy, ambiguous, and context-sensitive processes of meaning-making’ (p. 5). Psychologist Daniel Goleman (1995), author of the popular book Emotional Intelligence, asserts that cognitive scientists have been so ‘seduced by the computer as the operative model of mind’ (pp. 40f.) that they have forgotten that, ‘in reality, the brain’s wetware is awash in a messy, pulsating puddle of neurochemicals’ (p. 40f.) which is ‘nothing like the sanitized, orderly silicon that has spawned the guiding metaphor for mind’ (pp. 40–41).”
Randall, W. L. (2007). From Computer to Compost: Rethinking Our Metaphors for Memory. Theory & psychology, 17(5), 611-633.
“Semantic ambiguity exists in real-world processes of life and mind...Thus, it is feasible to rationally investigate a real-world semantic process, such as the interaction between synaptic communication and NDN, by placing the process into a modeling relation with an impredicative model, such as a hyperset process, and learn novel (albeit qualitative rather than quantitative) things about the real-world process by asking questions about the model.
What is not feasible is serious investigation of such processes by algorithmic computation. Algorithms disallow internal semantics, and specifically prohibit ambiguity. In other words, in a fundamental manner, the entailment structures of algorithms differ from the entailment structures of processes of life and mind. Thus, algorithmic descriptions of such processes are superficial, capturing the incidental syntax but not the essential semantics...
No computer program, no matter how cleverly designed, has an entailment structure like a mind, or even a prion.”
Kercel, S. W. (2003, June). Softer than soft computing. In Proceedings of the 2003 IEEE International Workshop on Soft Computing in Industrial Applications (SMCia/03) (pp. 27-32). IEEE.
“Today’s programs—at best—solve specific problems. Where humans have broad and flexible capabilities, computers do not.
Perhaps we’ve been going about it in the wrong way. For 50 years, computer scientists have been trying to make computers intelligent while mostly ignoring the one thing that is intelligent: the human brain. Even so-called neural network programming techniques take as their starting point a highly simplistic view of how the brain operates.”
Hawkins, J. (2007). Why can't a computer be more like a brain? IEEE Spectrum, 44(4), 21-26.
“there is no evidence for a computer program consisting of effective procedures that would control a brain’s input, output, and behavior. Artificial intelligence doesn’t work in real brains. There is no logic and no precise clock governing the outputs of our brains no matter how regular they may appear.”
Edelman, G. M. (2006). Second nature: Brain science and human knowledge. Yale University Press.
"the brain is not a computer, yet it manipulates information...while von Neumann and others invented computers with mimicking the brain in mind (von Neumann 1958), the brain does not appear to behave as a Turing Machine "
Danchin, A. (2009). Information of the chassis and information of the program in synthetic cells. Systems and synthetic biology, 3(1-4), 125-134.
“Determinism would be the crucial issue if the early modern atomist–reductionist picture were true. That is, if the causal capacities of complex entities were nothing but the combined causal effects of the entities’ constituents, and if the most basic constituents operated according to deterministic laws, then it would indeed seem to be the case that humans could do nothing other than what their atoms, in aggregate, do…
We have argued that this picture is wrong on three counts. First, it is widely accepted that the ‘‘atoms’’ (in the philosophical sense) do not behave deterministically. Second, it is becoming more and more widely recognized that complex dynamical systems can exhibit new sorts of causal capacities not found at the level of their constituents. We have emphasized, among these, sentience, goal seeking, consciousness, acting for a reason, and self-evaluation. Third, we have argued that higher-level systems exert downward effects on their constituents via selection among possibilities generated randomly, probabilistically, or according to deterministic lower-level laws.”
Murphy, N., & Brown, W. S. (2007). Did My Neurons Make Me Do It?: Philosophical and Neurobiological Perspectives on Moral Responsibility and Free Will. Oxford University Press.