Well that’s exactly what the ‘value of information’ framework attempts to help researchers / commissioners of research do …
The concept is to take what you already know, with its uncertainties: let’s say Zupermab vs. rituximab for treating severe ITP has an OR of death of 0.8 (95% CI 0.55 to 1.05). The 95% CI crosses 1; Zupermab might make things worse.
How much would it be sensible to invest in studies to be clearer about whether Zupermab is better than rituximab?
To do this you need to know what the benefit might be (how much is a life saved worth?) and how much the switch to Zupermab from rituximab would cost (so you can estimate the cost-effectiveness of the proposed switch). You then plug in your current understanding (based on the 95% CI you have) and come up with a range of possible actual answers, given where the truth might be.
Now propose “if we could tighten up that CI, to show that we were sure – for example – at 0.8 (95% CI 0.7 to 0.9)” – and recalculate what the cost-effectiveness of switching would be. The difference between these two is the ‘value of perfect information’.
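A minimal Monte Carlo sketch of that calculation, assuming every number beyond the OR and its CI (baseline mortality on rituximab, the monetary value of a death averted, the extra drug cost, the eligible population) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Illustrative, assumed inputs (not from any real trial) ---
log_or_mean = np.log(0.8)                                # point estimate: OR 0.8
log_or_se = (np.log(1.05) - np.log(0.55)) / (2 * 1.96)   # SE back-calculated from the 95% CI
p_death_ritux = 0.10                 # assumed baseline mortality on rituximab
value_per_death_averted = 1_500_000  # assumed monetary value of a life saved
extra_cost_per_patient = 20_000      # assumed extra cost of Zupermab over rituximab
eligible_patients = 5_000            # assumed population affected by the decision

# Draw plausible "true" odds ratios from the current evidence
or_draws = np.exp(rng.normal(log_or_mean, log_or_se, 100_000))

# Convert each OR into an absolute risk difference
odds_ritux = p_death_ritux / (1 - p_death_ritux)
p_death_zuper = (or_draws * odds_ritux) / (1 + or_draws * odds_ritux)
deaths_averted = p_death_ritux - p_death_zuper

# Net monetary benefit of switching, per patient, for each plausible truth
nmb = deaths_averted * value_per_death_averted - extra_cost_per_patient

# Under current information we make one decision: switch only if expected NMB > 0
value_current_info = max(nmb.mean(), 0)

# With perfect information we would pick the best option for each possible truth
value_perfect_info = np.maximum(nmb, 0).mean()

evpi_per_patient = value_perfect_info - value_current_info
population_evpi = evpi_per_patient * eligible_patients
print(f"EVPI per patient: £{evpi_per_patient:,.0f}")
print(f"Population EVPI (ceiling on research spend): £{population_evpi:,.0f}")
```

The population EVPI is the most you should rationally ever pay for research on this question: a perfect study cannot be worth more than the cost of the wrong decisions it prevents.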
Balancing this is your estimate of how much resource you’d have to put in to get there – ie “what size study would we need?”. This generates a sample size, and the research costs can then be guessed at.
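The “what size study would we need?” step can be roughed out with a standard two-proportion sample-size formula; the mortality figures here are again purely illustrative, chosen to correspond to an OR of roughly 0.8:

```python
from statistics import NormalDist

# --- Illustrative, assumed inputs ---
p_ritux, p_zuper = 0.10, 0.082   # assumed mortality on each arm (OR ~ 0.8)
alpha, power = 0.05, 0.80        # conventional significance level and power

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_beta = NormalDist().inv_cdf(power)

# Pooled-variance approximation for comparing two proportions
var_sum = p_ritux * (1 - p_ritux) + p_zuper * (1 - p_zuper)
n_per_arm = (z_alpha + z_beta) ** 2 * var_sum / (p_ritux - p_zuper) ** 2

print(f"Roughly {n_per_arm:,.0f} patients per arm")
```

Multiply the total recruitment by a per-patient trial cost and you have a first-pass research budget to set against the population EVPI. Note how punishing small absolute risk differences are: a 1.8 percentage-point gap already demands thousands of patients per arm.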
This cost is then weighed against the benefits that would be gained if the research were undertaken; if the cost outweighs the possible benefit, it’s a no-brainer rejection of the idea. If it doesn’t, then it becomes a matter of weighing the risk of pouring in the research money (and the potential gain) against other projects (and their potential gains)…