According to some critics, if biology is a kind of reverse engineering of nature, it is poorly prepared for the task, and the issue most likely lies with its ontology. Multiple hypotheses and conjectures found in papers on methodological issues claim that living systems should be viewed as complex networks of signal-transmitting paths, both neural and non-neural, that feature modularity and feedback circuits and are prone to emergent properties and increasing complexity. If so, we are on the eve of a new stage in the development of computer models, where not only are computers used to emulate life, but life itself is construed as a complex network of interacting natural computers. In 2002, Yuri Lazebnik used a salient and profound metaphor to clarify the main theoretical deficiency that keeps biology from being a unified and deductively consistent science modeled after physics. Asking whether a biologist could fix a broken radio, he revealed that what is missing is a unified formal language for describing the ultimate elements of living devices together with their typical combinations, as is commonly done in radio engineering. I specify in the paper that what Lazebnik means by a “formal language” is not a language of propositions about the world, i.e., of asserting states of affairs, but rather a language for listing the relevant types of objects and their relations. I refer to it as a domain ontology. A theory needs another language to describe actual states of affairs, and this language will most probably have to be mathematical in order to represent complicated natural structures in detail. I then touch on the popular view according to which a domain ontology is inferred by the theory proper. The history of science shows, however, that theories still viable today were often paired with now-abandoned ontologies, such as those of caloric or phlogiston.
I suggest that a theory does not infer its ontology but rather is interpreted upon it, remaining inferentially independent of it. I also review some historically important attempts to mathematize the knowledge of life. I mention Alan Turing’s article on morphogenesis, where he used linear differential equations to explain the emergence of complexity from homogeneity. I then briefly touch on the works of Nicolas Rashevsky, whose theories inspired the inventors of artificial neural networks and enabled the abundant use of diverse mathematical tools by his disciple Robert Rosen in his study of metabolism. Closer to the present day, various computational theories in biology have emerged. Some of them treat protein combinations as networks of signal-transmitting pathways that can store and process information; moreover, in unicellular organisms, protein-based circuits stand in for the entire nervous system as a behavior-controlling network. Other theories propose a view in which an organism is construed as a system of modules connected by protocols, or interfaces. A domain ontology like this may considerably simplify the task of scientific description. Special attention is paid to applications of the well-known free-energy (minimization) principle to matters of life science, as it was initially intended to address issues of cognitive science. In general, within this view, for an organism to survive is to minimize its thermodynamic potential energy, for which purpose the living being as a whole, and each of its subsystems, must constantly produce statistical models of the environment that are continually updated with incoming data. Powerful Bayesian mathematics combines with this ontology, making the whole enterprise arguably the most prominent universal theory of complex developing systems today.
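To make the free-energy formulation above concrete, here is its standard variational statement from the cognitive-science literature (a sketch in the notation usual in Friston's treatment; the particular symbols are my assumptions for exposition, not quotations from any of the surveyed works):

```latex
% Variational free energy F of an organism's internal model q(s)
% over hidden environmental states s, given sensory observations o:
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left(q(s)\,\|\,p(s \mid o)\right)}_{\text{model inaccuracy}}
  \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Since the Kullback–Leibler divergence is non-negative, F is an upper bound on surprise (negative log evidence); minimizing it both pulls the organism's model toward the true posterior over environmental states and keeps its sensory states within expected bounds.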
As the general outcome of the survey, I propose a computational methodological approach to doing biology, based on David Marr’s famous three-level view of computational systems, together with the requirement of identifying the elementary nodes of which living systems are composed. Such an approach may, I hope, generate a set of competing theories that will eventually help biologists fix their “radio”.