He has specifically stated that he believes the universe has meaning, that consciousness is fundamental, that he thinks there is something like Spinoza's pantheist God in the universe.
He may well have (you would know better than I), but the question then becomes whether this is part of the theory he follows or not. It need not be, of course, and if it is not, that doesn't make his belief false. But I still think it is important to distinguish the views that are part of a theory from those which are unique (or at least individual) to the researcher.
I think Koch and Tononi have stated that consciousness is fundamental because information is only meaningful when something assigns meaning to it.
It's one thing to say that consciousness is fundamental, and another to say it is everywhere. IIT specifically prohibits the latter, as information is "physical" and is not conscious. Only systems which are capable of processing and integrating information to some extent can be considered conscious. Personally, I think that as soon as one states that some intracellular organelle assigns "meaning" simply because according to information theory it is to some extent responding to and integrating information, the term loses any meaning (and if there is any term which should retain its meaning, it's the term "meaning").
More importantly, perhaps, while these definitions do help us formalize notions like consciousness, they don't get us closer to explaining what humans (or similar mammals) do. Having defined consciousness as the ability of a system to integrate information, they can then use information theory to quantify, to some extent, how "conscious" a system is, but we could do that anyway without the theory. And unless IIT can be shown to bridge the gap between conceptual processing and the unthinking adaptation/reaction of most systems, I think treating consciousness as a gradient actually creates more problems than it solves. In a way, IIT treats consciousness the way classical computer science did, only with system complexity and information in place of symbolic processing and algorithms. We're still left with nothing to explain the distinction between the kind of learning a machine or plant is capable of and the kind a human is, along with the conception of this difference as quantitative rather than qualitative.
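To make the quantification point concrete, here is a minimal toy sketch of my own (not Tononi's actual phi, which is defined over cause-effect repertoires and is far more involved): given a joint distribution over a few binary units, measure integration as the minimum, over all bipartitions, of the mutual information between the two parts. The function names and the example distributions are invented for illustration.

```python
# Toy "integration" measure, loosely inspired by IIT: the minimum
# mutual information between the two sides of any bipartition. A high
# value means no way of splitting the system removes the information
# the parts carry about each other.
import itertools
import math

def marginal(joint, idxs):
    """Marginal distribution over the units listed in idxs."""
    out = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in idxs)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_info(joint, part_a, part_b):
    """I(A; B) in bits for a bipartition of the units."""
    pa, pb = marginal(joint, part_a), marginal(joint, part_b)
    mi = 0.0
    for state, p in joint.items():
        if p == 0:
            continue
        a = tuple(state[i] for i in part_a)
        b = tuple(state[i] for i in part_b)
        mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

def toy_phi(joint, n_units):
    """Minimum mutual information over all bipartitions of the units."""
    units = range(n_units)
    best = float("inf")
    for r in range(1, n_units):
        for part_a in itertools.combinations(units, r):
            part_b = tuple(u for u in units if u not in part_a)
            best = min(best, mutual_info(joint, part_a, part_b))
    return best

# Two perfectly correlated bits: 1 bit of irreducible integration.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair bits: no integration at all.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
print(toy_phi(correlated, 2))   # 1.0
print(toy_phi(independent, 2))  # 0.0
```

Note what the number does and does not tell us: the correlated pair scores 1 bit and the independent pair scores 0, but nothing in the measure distinguishes a thermostat-like correlation from anything we would recognize as experience, which is exactly the gap described above.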
Koch has stated that qualia are a different property of the universe
In his book (The Quest for Consciousness)? Unfortunately, I haven't read it.
If according to the theory the causal effects of different neurons create consciousness, and that is information
According to the theory, consciousness is the ability of a system, like certain neural regions/populations/networks in the brain, to receive and integrate information that is in some sense distinct from the system. Consciousness and information are, according to IIT, necessarily distinct, as the former is a product of the capacity of a system to react in particular ways to the latter.
I think a good way to test this out would be through optogenetics on animals that have absence seizures. If they can somehow show that the animal is still conscious while not actually feeling a sense of awareness, that would be strong empirical evidence that awareness is not consciousness.
The problem, though, is again how the terms are defined. According to IIT, humans are not conscious, but rather possess numerous parts which are independently conscious. Tononi & Koch make this pretty explicit when they speak about the brain having multiple systems with a phi value, and thus multiple "conscious systems", to the extent that anything with a phi value is conscious. Of course, this makes the whole theory problematic, because the point is to quantify consciousness in a way which makes empirical investigation possible while maintaining some useful way of describing the human "mind". To avoid the consequence that a human is really a collection of multiple simultaneous conscious systems, they propose a "main complex", but give no reasons to support this. In effect, then, the theory sort of folds back on itself, at least to the extent that it is supposed to be useful when it comes to explaining qualia. At any given time, the brain has multiple "complexes" with phi values, so how do these result in the subjective, cohesive, conscious experience of some sensation (like color)? Apart from the suggestion of a "main complex", we're sort of back at square one: the brain does stuff.