Really now? I would love to see that process of elimination- all possible attempts to describe an objective morality reduced down to a single one.
(No, I am not being sarcastic- I would love to see it done; I just do not believe it can be.)
It's easiest to explain if you first take morality in the context of what is relevant to action- not belief for belief's sake, which has no bearing on how one actually acts.
That is, a genuine functional goal.
Without that, it's a little tricky.
Once we reach that point, it's easy to dismiss all moral systems that are illogical or unscientific, on the grounds that the world views they represent are less probable- their consequences relative to a goal can be self-contradictory or unpredictable in the context of the real world, making them immoral beliefs by way of their improbability and inconsistency.
That eliminates all claims to legitimacy by revealed religions and spiritual gnosis- relegating those to random/arbitrary sources. Since these have been the most common historical grounds for elaborate moral systems, and are still practiced by the majority, eliminating them disqualifies the majority of world opinions on morality.
What is left over is that which can be derived from logic.
One then dismisses innate goals, because acceptance of those as morality would make the term meaningless (as mentioned before).
Things like Satanism and Objectivism are out. That also disqualifies social and instinctive moral drives (hormonally based altruism, which has its own gratification and is about as relevant to philosophical morality as sleeping or eating), as well as social contract and game theory, as candidates for the source of objective morality (though those things definitely come into play in the functional application of moral goals).
These are the principles their advocates currently advance as the most rational forms of naturalistic morality- separating them out as a categorical division and eliminating them from consideration removes the bulk of the serious competition.
Eliminating the innate or selfish goals, we're left with the other side of the self/selfless dichotomy.
Selection of any arbitrary "selfless" goal would itself be selfish (based on personal preference)- e.g. we cannot simply choose a goal out of the blue, because that choice reflects selfish desires and innate tendencies, which would make morality irrelevant (such arbitrary goals fall into the former category, as opinions).
This is an important categorical division among all possible goals; most possible goals are arbitrary in this respect, and so invalidated.
Revealed morality being out already, we have to narrow it down to the science of the matter.
Rather than consideration for one's self, it is consideration for something that is not oneself.
The only thing that is not oneself that possesses a concept capable of being considered (an interest) is another intelligent adaptive information system.
One is forced to narrow the consideration to interests that are capable of being relevant, as opposed to non-interests.
E.g. we can't take a rock's non-interest in being painted purple as a moral goal, because no such interest exists in reality; such an interest would be an arbitrary and self-serving creation of our own imaginations (thus disqualified as having a selfish origin).
This point narrows things down massively- there are a finite number of real interests, and a virtually infinite number of unreal/imaginary interests.
We can only consider the interests of things that have interests; but in order to be non-arbitrary, we must consider the interests of everything that has interests, to the extent those things have said interests (anything else would be an inconsistent personal bias).
Moral relevance scales with the entity's consciousness/intelligence, in a sense- a pig has more acute and elaborate interests, and more capacity to hold them, than a house fly, for example. We have to look hard at cognition here. Adaptive neural networks in a computer simulation can be considered similarly, as can even physical evolutionary forces.
In the balance, one derives something not terribly unlike utilitarianism, but with regard to the interests of beings that can possess them, as opposed to a measure of mere pleasure or pain- which may approximate those interests, but does not represent the totality of them (e.g. interests can extend beyond one's lifespan).
Interests have clear exchange rates in terms of sacrifice and willingness to experience pain to see them through; Nietzsche articulated this to some degree in his formulation of the "will to power", but the influences here are manifold, and complex enough that with our current knowledge of cognition we can only approximate them (though approximate them we should, since an approximation is the greater good compared to ignoring them- it strives closer to that goal of consideration than a less accurate model would).
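To make that aggregation concrete, here is a minimal sketch of the kind of weighted sum this implies- purely my own illustration, where the entities, capacity weights, and satisfaction scores are hypothetical stand-ins rather than measured quantities, and the real "exchange rates" would be vastly more complex:

    interests = [
        # (entity, capacity_weight, interest_satisfaction in [0, 1]) - all made-up numbers
        ("pig",       0.60, 0.4),
        ("house fly", 0.05, 0.9),
        ("human",     1.00, 0.7),
    ]

    # Each being's interests count in proportion to its capacity to hold them,
    # a crude stand-in for the "exchange rate" idea above.
    total_consideration = sum(w * s for _, w, s in interests)
    print(total_consideration)  # 0.60*0.4 + 0.05*0.9 + 1.00*0.7 = 0.985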
Naturally one makes use of science and logic- empirical observation, game theory, etc.- to maximize efficiency in striving for one's goal (I think that goes without saying), so there's plenty more I could go into as to execution, but the point was only to elucidate a non-arbitrary, consistent, and coherent goal to the exclusion of its alternatives.
I'm assuming morality is some kind of behaviorally relevant methodology of positive consideration for some goal.
Barring that, we might be able to say that, rather than consideration for the other beyond oneself, morality could be a complete lack of consideration for anything (not even for oneself)- but that is something of a non-goal rather than a goal (due to the way the mind works via motivation, it does not result in a non-random behavior/methodology which can be practiced, and it is functionally suicidal).
That potential definition is ruled out semantically, if not also logically (a goal to consider nothing must consider itself, thus negating its purpose). That gets into ontologically tricky, "This statement is false" territory.