Programming Course 
05-13-2014, 05:51 PM
I'm trying to learn how to convert a decimal (base 10) floating-point number, such as 12.2538, to a binary (base 2) floating-point number. Every time I write out a random number and try to do it, I fail. Is there some rule or procedure for it? If you know how to do it and can explain it, I'd appreciate it.
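For reference, the usual hand method is: convert the integer part by repeated division by 2 (collecting remainders), and convert the fractional part by repeated multiplication by 2, taking the integer digit (0 or 1) produced at each step. A minimal Python sketch of that procedure (the function name `frac_to_binary` and the `max_bits` cutoff are my own choices, not from the post):

```python
def frac_to_binary(value, max_bits=20):
    """Convert a non-negative decimal number to a binary digit string."""
    int_part = int(value)
    frac = value - int_part
    bits = []
    for _ in range(max_bits):
        frac *= 2              # doubling moves the next binary digit left of the point
        bit = int(frac)        # that digit is the new integer part (0 or 1)
        bits.append(str(bit))
        frac -= bit            # keep only the remaining fraction
        if frac == 0:
            break              # the expansion terminated exactly (many don't)
    # bin() handles the integer part; join the collected fraction bits after the point
    return bin(int_part)[2:] + "." + "".join(bits)

print(frac_to_binary(12.5))      # a fraction that terminates exactly in binary
print(frac_to_binary(12.2538))   # one that doesn't, so it gets cut off at max_bits
```

The likely reason random numbers keep "failing" is that most decimal fractions (0.2538 included) have no finite binary expansion, since a fraction terminates in base 2 only when its denominator is a power of 2; you have to truncate or round after some number of bits.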
