Conversation

@astrofrog
Member

I was using astropy.units to check some answers in a Physics tutorial, and one thing I tried instinctively was something like:

>>> a = 150 * u.nm
>>> d = 1.6e-7 * u.m
>>> P = np.exp(-a / d)

of course, the real answer is something like 0.39, because a / d ≈ 0.94, but by default, float just uses the raw value of the quantity, and:

>>> a / d
<Quantity 937500000.0 nm / (m)>
>>> (a / d).value
937500000.0

so P ends up being zero, which is bad. I'm not exactly sure how we should deal with cases like this - should Quantities always check if they are actually dimensionless, and if so, automagically simplify themselves? So then we'd have:

>>> a / d
<Quantity 0.9375000000000001 1.000000e+00 >

but then, sometimes people might actually want to keep a Quantity in e.g. m/kpc, so this doesn't seem like a good solution.

This makes me wonder whether maybe it is too dangerous to have automatic float conversion work for everything, and whether we should make it so automatic float conversion only works for dimensionless quantities? (with an exception raised if not). It does mean that for a value in m/kpc, q.value will be different from float(q), but maybe that's ok since then they have different purposes?

cc @mdboom @eteq @iguananaut @taldcroft

@astrofrog
Member Author

The more I think about it, the more I think it only makes sense for float/int etc. to work for dimensionless quantities. The attached code leads to the following behavior, which I think is clearer and less ambiguous:

In [1]: import numpy as np

In [2]: from astropy import units as u

In [3]: a = 150 * u.nm

In [4]: d = 1.6e-7 * u.m

In [5]: float(a)
ERROR: TypeError: Only dimensionless quantities can be converted to Python scalars. Use the `value` attribute to access the value of the quantity in the current units. [astropy.units.quantity]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-5-93d25633ffc4> in <module>()
----> 1 float(a)

/Users/tom/Library/Python/3.2/lib/python/site-packages/astropy-0.3.dev2891-py3.2-macosx-10.6-x86_64.egg/astropy/units/quantity.py in __float__(self)
    375             raise TypeError('Only scalar quantities can be converted to Python scalars')
    376         if not self.unit.is_dimensionless():
--> 377             raise TypeError('Only dimensionless quantities can be converted to Python scalars. Use the `value` attribute to access the value of the quantity in the current units.')
    378         return float(self.decomposed_unit.value)
    379 

TypeError: Only dimensionless quantities can be converted to Python scalars. Use the `value` attribute to access the value of the quantity in the current units.

In [6]: float(a / d)
Out[6]: 0.9375000000000001

In [7]: np.exp(a / d)
Out[7]: 2.5535894580629273

Of course, the tests will fail and the docs need updating, but I'd be interested in getting feedback on this.

@embray
Member

embray commented Jan 24, 2013

This looks like what you want:

In [3]: a = 150 * u.nm

In [4]: d = 1.6e-7 * u.m

In [5]: np.exp(float((a / d).decomposed_unit))
Out[5]: 2.5535894580629273

@mdboom might know better, but I think this was the compromise between wanting to keep dimensionless quantities like m/kpc intact by default, but still have an easy way to decompose that.

I wouldn't place high priority on it for now, but it might be nice if it decomposed automatically just in the case where you have the same unit with different prefixes (such as your case with nm and m), but not when you're mixing two genuinely different units like m and pc (even if they represent the same physical dimension).

@embray
Member

embray commented Jan 24, 2013

(Note: The explicit float() in the above example isn't necessary; I just put it there for illustration purposes.)

@eteq
Member

eteq commented Jan 24, 2013

I think I agree with @astrofrog here. The code @iguananaut shows is indeed the right way as things work right now, but I agree it's counter-intuitive (to me) that the example didn't work. It's unique to the case where a value is dimensionless, because, when you write out the math, normally you don't worry about units for dimensionless quantities. So we have to be extra careful, because no one even expects units to matter at all in that situation.

I'm ok with the solution presented here because it just makes people use value more. I think there were others who thought it was more important to preserve the float() and int() behavior? I'm not sure who, though...

Another possibility might be to change the exp, log, and log10 ufunc behavior so that when they're given a dimensionless Quantity, they automatically decompose, because in that context it doesn't really make physical sense to do anything else. That seems a bit too magic to me, so I prefer @astrofrog's solution, but it might be a middle ground...

@embray
Member

embray commented Jan 24, 2013

I think this is taking away too much functionality that I would expect to "just work". The entire point of __float__ and friends was that I could pass a Quantity to a function that doesn't know about Quantities and it will just work. For example, if I have some value in 'm/s' and I'm passing it to a function that takes a velocity, then the automatic value is the correct one (assuming I'm already in whatever units are expected by that function).

Instead of raising a TypeError, why not just decompose the units if they're dimensionless, and otherwise leave them alone? That is,

if self.unit.is_dimensionless():
    return float(self.decomposed_unit)
else:
    return float(self.value)

I think that would cover the two most general cases in the way most users would expect. It would break the m/kpc case, for example, but maybe that can just be documented as a special case?

@astrofrog
Member Author

As @eteq said, I'm indeed only suggesting we change the behavior of the float() etc functions, and am not actually interested in having the quantity automatically simplify itself in the 'dimensionless' case (we agreed before this would be bad).

@iguananaut - I think that what you are suggesting would be confusing, because float() would work both for things that have units like km/s, and things that have units like km/pc, but in the former case, the units would be left unchanged, while they would be decomposed in the latter.

What I'm essentially suggesting is that those functions act more intuitively. One cannot conceptually do np.exp(quantity) unless the quantity is dimensionless, otherwise the units don't get carried through. I'm not actually worried about use cases where users explicitly use float() because that's the same as value, but I'm worried about all the cases where those functions get called implicitly as for exp. So to me, the following should fail:

np.exp(3 * u.m)

because the units can't be converted, and it's a good fail-safe, because actually, in most cases, you're probably doing something wrong if you are taking the exp of something with units (log is a special case where people do do it, but again I think they should be careful about the units the quantity is in rather than trusting the result blindly).

So long story short, I think that it would be much better to only allow implicit float conversion in cases where the quantity has no units, and use the value attribute to access the quantity in the current units.

Note that I think we could try and define a custom behavior for np.sqrt so that in that case it will actually return the same as if the user had done **0.5, and then the result would still have units. Then the behavior would be:

>>> np.exp(1. * u.m)
ERROR telling user to use .value
>>> np.sqrt(1. * u.m**2)
1 * u.m
>>> float(1. * u.m)
ERROR telling user to use .value
>>> np.exp(3 * u.m / (200. * u.cm))
the value
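
The proposed policy can be sketched with a hypothetical ToyQuantity class (purely illustrative, not astropy's implementation; values are assumed pre-scaled to SI base units, so prefix handling is elided). Since math.exp calls __float__ implicitly, the dimensionless ratio goes through while a bare length raises:

```python
import math

class ToyQuantity:
    """Toy stand-in for Quantity; tracks powers of SI base units only."""

    def __init__(self, value, unit_powers):
        # unit_powers, e.g. {'m': 1}; zero powers are dropped so that
        # fully-cancelled units register as dimensionless
        self.value = value
        self.unit_powers = {u: p for u, p in unit_powers.items() if p}

    def __truediv__(self, other):
        powers = dict(self.unit_powers)
        for u, p in other.unit_powers.items():
            powers[u] = powers.get(u, 0) - p
        return ToyQuantity(self.value / other.value, powers)

    def __float__(self):
        if self.unit_powers:  # the units did not cancel
            raise TypeError("Only dimensionless quantities can be "
                            "converted to Python scalars; use `value`.")
        return float(self.value)

a = ToyQuantity(150e-9, {'m': 1})   # 150 nm, pre-converted to metres
d = ToyQuantity(1.6e-7, {'m': 1})

print(math.exp(a / d))              # dimensionless ratio: works
try:
    float(a)                        # bare length: TypeError
except TypeError as exc:
    print(exc)
```

The key point is that the check lives in __float__ itself, so every implicit conversion (math.exp, passing to a C routine, etc.) gets the same fail-safe for free.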

@mdboom @taldcroft - do you have any thoughts on this?

@mdboom
Contributor

mdboom commented Jan 25, 2013

I think I'm with @astrofrog on this. I'd further be fine with no implicit float conversion at all, but I think @astrofrog is suggesting a nice middle ground that avoids the large potential for confusion and shooting oneself in the foot.

@embray
Member

embray commented Jan 25, 2013

@astrofrog @mdboom Let me try again. This has nothing to do, in my mind, with what is or isn't correct with respect to units. The entire point of this functionality in the first place was interoperability of Quantity objects with existing APIs that don't know or care that some number has a "unit" attached to it, which is going to be most of them. We're not going to monkey-patch Numpy, for example, or any other library to support units.

Say for example I have a bunch of 3-tuples representing velocities and I want to normalize them. With normal ints or floats I can just do this:

In [22]: v = (1, 2, 3)

In [23]: v / np.linalg.norm(v)
Out[23]: array([ 0.26726124,  0.53452248,  0.80178373])

This "just works" because as long as all the elements in a tuple can be converted to an array type Numpy will do that automatically. True you get back an array instead of a new tuple, but that's fine. Maybe I'm going to pass it on to some visualization code anyways.

Now say I'm working with Astropy quantities, which I prefer to because they contain more information and all these things. Right now I can do the exact same thing as in the previous example without any change to the code other than that the tuple is now a tuple of Quantities:

In [35]: from astropy.units import Quantity as Q

In [36]: v = (Q(1, 'm/s'), Q(2, 'm/s'), Q(3, 'm/s'))

In [37]: v / np.linalg.norm(v)
Out[37]: array([ 0.26726124,  0.53452248,  0.80178373])

Sure the returned value loses the unit information (though this will be less of an issue once we can start carrying that around in NDData arrays). So better still, it also works (aside from #679 for which I've implemented a temporary fix for the sake of this example) to do:

In [5]: v = Q((1, 2, 3), 'm/s')

In [6]: v / np.linalg.norm(v)
Out[6]: <Quantity [ 0.26726124  0.53452248  0.80178373] m / (s)>

Now you do get back a Quantity with the units attached which is nice, and this object should continue to "just work" fairly deep into third-party library code since __array__ will continue to work.

The short version of this is that I think it's a big win to say "You can start using Quantitys in your code right now and you don't have to change anything else or even make any special conversions." If we do this then the best we can say is, "Well, you can use Quantities with your existing code, but only in special cases; the rest of the time you have to do [q.value for q in v] if you want to use your Quantity objects outside Astropy code." In that case you're throwing away information about the units anyways, and it's the coder's responsibility to be sure that those values are in the correct units.

@mdboom
Contributor

mdboom commented Jan 25, 2013

I think supporting the second example is fine, because there is only one unit involved:

v = Q((1, 2, 3), 'm/s')

However, what about this:

v = (Q(1, 'm/s'), Q(2, 'm/s'), Q(3, 'km/s'))

If we could, it should convert everything to the same unit before performing the operation. But without monkey-patching Numpy, I don't know how we could, and you're just likely to get confusion. I really think it's better to disallow this. If someone has to say q.value, they're more likely to understand that the value is just a scale on top of the unit, and not some universal value that can be used with other values.

@embray
Member

embray commented Jan 25, 2013

The latter case is probably a bad example. I think that if one wants a collection of quantities like a vector or matrix, and wants to ensure that they're all in the same unit, one should use the quantity array support. I agree that mixing together a bunch of quantities with the same units but different scale factors could be dangerous if done carelessly.

But that's a danger one can get into in either case. If a user is passing a bunch of Quantity objects to a function that's expecting values in m/s and they're carelessly doing v.value without checking the actual scale then they'd run into the same problem if a Quantity comes along that's accidentally in 'km/s' for some reason.

And besides, maybe there are cases where one would want a collection of some coordinates, for example, that have different units.

@embray
Member

embray commented Jan 25, 2013

I'd further add that if this feature were to only work in a special case, I'd argue we should take it out altogether--an implicit special case that only sometimes works is user-hostile even if it's documented.

@eteq
Member

eteq commented Jan 25, 2013

@iguananaut - I agree that it's possible for the user to make mistakes with .value, but that means they had to actually think about it. I think the main problem with the current behavior is that

a = Q(1, 'm/s')
b = Q(2, 'm/s')
z = Q(3, 'km/s')
#... a bunch of intervening code that does something
v = (a, b, z)
v / np.linalg.norm(v)

Silently works but is almost certainly wrong. That said, I see your point that supporting this for arrays and not floats is problematic, and as @mdboom said, the v = Q((1, 2, 3), 'm/s') case is fine. Is there any way to still give the user clear warning, but still support the scenario?
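
To make the failure mode concrete, here is a toy sketch (class name hypothetical) whose __float__ simply returns the raw value, mimicking the current behavior; NumPy happily normalizes the raw numbers and the km/s entry is silently treated as if it were m/s:

```python
import numpy as np

class RawQuantity:
    """Toy: __float__ ignores the unit, like the behavior criticised here."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit
    def __float__(self):
        return float(self.value)

v = (RawQuantity(1, 'm/s'), RawQuantity(2, 'm/s'), RawQuantity(3, 'km/s'))

# Implicit float conversion strips the units, so this normalizes (1, 2, 3)
arr = np.array([float(q) for q in v])
print(arr / np.linalg.norm(arr))

# What the user presumably meant, with km/s converted to m/s first:
correct = np.array([1.0, 2.0, 3000.0])
print(correct / np.linalg.norm(correct))
```

The two results differ wildly, and nothing in the first computation hints that anything went wrong.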

We could have it be a warning instead of a failure. But I'm slightly against that because it seems like we should just pick one side and accept the consequences (I just don't know which is the right side).

Regardless, though, I think @astrofrog's original case of doing exp(a/b) where a/b is dimensionless cannot keep the current behavior. I guarantee this will become a major source of confusion once we have less savvy users using Quantity, because of the nature of typically-encountered astro formulae.

@embray
Member

embray commented Jan 25, 2013

I'm not clear on how directly using foo.value represents clear intent on the part of the user in the case that the user has to always do that anyways in order to use some quantity or quantities in existing code that doesn't support them. Is it just that the code will crash otherwise?

@embray
Member

embray commented Jan 25, 2013

Actually I'd be totally happy with a warning.

@embray
Member

embray commented Jan 25, 2013

> Regardless, though, I think @astrofrog's original case of doing exp(a/b) where a/b is dimensionless cannot keep the current behavior. I guarantee this will become a major source of confusion once we have less savvy users using Quantity because of the nature of typically-encountered astro formulae.

That's why I suggested in the first place that the dimensionless case use .decomposed_unit, which this fix does. But I don't think that means the other case should be invalid. It's really either all or none.

@astrofrog
Member Author

I need to think about this more, but one could actually revert to the position held by some originally that .value doesn't really mean much either, and that what we really should have is a value_in(...) method, so that the internal representation of the quantity is not important, because if the user wants to extract the value, they have to explicitly specify the units.

In any case, while of course using decomposed_unit solves my initial issue, there is no way that beginner users will use that or know that they have to. Even as a developer, I got this wrong and it took me a minute or two to figure out what was wrong. So regardless of what we decide, I consider - as @eteq does - the current behavior to be broken, and it's going to be a far more common use case than taking the np.exp of a non-dimensionless quantity.

If we want to be really safe, then we could:

  • only allow implicit float conversion for dimensionless quantities because one can argue that in those cases, nothing is lost (since there are no units anyway).
  • recommend that users specify what units they want a value in, i.e. .value_in(...), and also have value_in_cgs and value_in_si attributes for convenience (or allow .value_in(u.si) and .value_in(u.cgs)).
  • still allow the .value attribute, but make it clear to users that this is the value in the unit .unit, so they should be careful with what they expect.

This is still pragmatic (the idealistic view would be to get rid of .value altogether), but I think it would solve my concerns with the initial issue, and would encourage better practice code-wise: not assuming what units the value of a quantity is in, but specifying them explicitly with value_in, which leads to more readable code.
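
As a sketch of the proposed spelling (names hypothetical; a toy conversion table stands in for astropy's unit machinery), value_in(unit) would be nothing more than sugar for to(unit).value:

```python
# Assumed conversion factors to metres, for illustration only.
_SCALES = {'m': 1.0, 'km': 1000.0, 'nm': 1e-9}

class ToyQuantity:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def to(self, unit):
        # return an equivalent quantity expressed in `unit`
        factor = _SCALES[self.unit] / _SCALES[unit]
        return ToyQuantity(self.value * factor, unit)

    def value_in(self, unit):
        # the proposed explicit accessor: identical to .to(unit).value
        return self.to(unit).value

q = ToyQuantity(1.5, 'km')
print(q.value_in('m'))      # 1500.0
print(q.to('m').value)      # 1500.0 -- the same thing, spelled longer
```

The equivalence is exactly why value_in is debated below: it adds readability at the call site, but no new capability over .to(...).value.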

@astrofrog
Member Author

(of course, if people agreed to switch over entirely to using value_in, and get rid of value, then that's even better, as it removes any ambiguity)

@mdboom
Contributor

mdboom commented Jan 25, 2013

@astrofrog: That actually makes a lot of sense. I think "power users" may still want the ability to just "get the !@#$ value", but value_in should be the preferred way.

Note it seems to me that value_in and value_in_cgs/si are fundamentally different things. value_in would return a value in a requested unit, whereas you don't really know what the unit of value_in_si would be, and it could be one of a number of things. I think it's best to spell the latter differently to make that clear.

@embray
Member

embray commented Jan 25, 2013

Saying "nothing is lost" in the dimensionless case is not quite right either. What if you really have a use case where you need the value to be a ratio of, say, meters to nanometers (a trivial case, since that's just a base-10 scale factor, but less trivial if we're talking, say, light-years per kpc)?

Now granted I'm not the intended audience so I don't claim to know right from wrong as far as what would be most useful. But I have already been using this framework in my own work and I've occasionally needed to represent ratios like that unambiguously. I completely agree that the case you found that started this issue is surprising and dangerous.

That said, such dimensionless ratios are probably not the most common use case, and they can also be restored if really needed. For example:

In [46]: q = 0.5 * u.MeV / u.J

In [47]: # If float() returns decomposed_unit by default for dimensionless quantities:

In [48]: f = float(q)

In [49]: u.Quantity(f, unit=u.dimensionless).to(u.MeV / u.J)
Out[49]: <Quantity 0.5 MeV / (J)>

Which also leads to the question: What's the difference between value_in and to? I agree that either way that's the safest bet but I don't see what the difference is.

But again, I don't know what the best use case is.

@astrofrog
Member Author

@iguananaut - the problem is that in a lot of cases, the units are not going to be as simple as u.MeV / u.J. What people are likely to do is import some constants which they don't even really know the units of, and multiply those by some quantities, and at that point the units become a lot more complicated, so doing float() or np.exp or whatever is going to produce unintuitive results.

Since this is controversial, I wonder whether this means that we should, for 0.2, remove some of these features until we agree on them? We could remove the automatic float conversion altogether, and the .value attribute, and switch to having .value_in(...) be the only way to extract the value.

@mdboom - in my mind, value_in_cgs (or SI) would be unambiguous, as it would mean the base units of SI/cgs (but I agree this may not be what everyone would think).

@embray
Member

embray commented Jan 25, 2013

I get the .value_in thing, but that still doesn't answer the question of what the difference is between .value_in and .to.

And that raises the question: if quantities don't have a default value in any units (and really they do--it's whatever value/unit was passed in when the quantity was initialized), how do you represent what quantity the Quantity object represents? Like, when one does a __repr__? Or, once one defines a Quantity, is it basically valueless until another unit is supplied? That seems onerous to the point of unusability, because I think most people are going to be working in some known units most of the time. And I don't think that's what you're suggesting--I just don't know how you would propose to resolve that ambiguity.

@astrofrog
Member Author

@iguananaut - the difference is the following: to converts to another Quantity object, while value_in would return a plain floating-point value of the quantity in those units.

From an idealistic point of view (not what I am necessarily advocating), having value_in as the only main way to get a value out would be the thing to do. __repr__ can just show the units in which the quantity was defined - after all it doesn't matter, since __repr__ isn't used for any mathematical operation.

But in the end, I am fine with pragmatic, hence my proposed 'middle-ground' solution of just keeping .value, and then one can always encourage users to convert using to(u.m/u.s).value which means the same as value_in(u.m/u.s). Converting to float would be fine if it was just a matter of explicitly calling float(), but it's not - I'm worried about the cases in which the implicit conversion to float will make things wrong (hence why I suggest disabling it for all but the dimensionless cases, where it actually makes some sense).

@embray
Member

embray commented Jan 25, 2013

Also, I would argue that if one is combining pre-defined constants with other quantities and they're not sure what units those constants are in, they should always use .to, either on the constant or on the end result. That, or maybe constants should have some special behaviour of automatically adapting to the units of any quantity they're combined with, possibly reducing to dimensionless units if applicable.

@embray
Member

embray commented Jan 25, 2013

I'd be fine with removing the automatic float() conversion and friends, but only if it's in all cases. I don't think the dimensionless case necessarily makes more sense than a case where I know what units I'm using and have already taken care to keep them consistent. Better not to have behavior that works in one case but not another--that's surprising and too easy to lead to false assumptions.

@astrofrog
Member Author

@iguananaut - I do agree that the dimensionless case only really makes more sense to me because that's what I ran into, so maybe that is indeed an argument for just removing implicit float conversion. It would be a nice feature, but the potential for confusion/error is large... I think we should think about this over the weekend and see what conclusion we reach by Monday. What I meant before was that maybe we should choose to be deliberately conservative for 0.2, and turn off automatic float conversion, even if it's a bit of a pain for users. However, I do think that a lot of use cases will actually not involve float conversion at all. What about (for 0.2):

  • remove automatic float conversion
  • implement value_in and make that the preferred method in the docs
  • keep value and mention it in the docs as a shortcut to getting quantity in current unit

?

@eteq
Member

eteq commented Jan 26, 2013

@astrofrog @iguananaut - I would prefer not to have value_in(...), because it's exactly the same as .to(...).value. And there are important use cases for to independent of .to(...).value.

That said, I think I agree with dropping the automatic float conversion for now (e.g. 0.2). It's really not that onerous to have to use value.

And I actually think the dimensionless case is a very important special case, because it's the only mathematically valid argument to exp, log, or as a power-law index. And given that those are pretty much the most commonly-used operations in astronomy, users will often be confused if it doesn't have the behavior we learned in all our classes... But I'm fine with punting a final judgement on that to 0.2.1 or 0.3.

@astrofrog
Member Author

Is it possible to at least get np.sqrt working? (can __sqrt__ be defined and work?) That would already be useful. If we agree on removing automatic float conversion for now, I'll do this in this PR over the weekend. I also agree value_in is somewhat redundant with to(...).value.

Finally, I also agree on deferring a decision on exp, log, etc. until 0.2.1 or 0.3, as we already have a lot on our plates otherwise.

@adrn
Member

adrn commented Jan 26, 2013

I'm with @eteq on this one: same number of characters in q.to(u.kpc).value vs. q.value_in(u.kpc) :)

So I'm for keeping q.to(u.kpc).value and not implementing value_in()

@embray
Member

embray commented Jan 28, 2013

Still not convinced that telling users "just use .value" is any "safer". For <Quantity 937500000.0 nm / (m)> they'll still get 937500000.0 for .value if they were assuming it was just going to be a dimensionless ratio. If they want to be sure they should use .to(u.dimensionless) or .decomposed_value().

As for .value_in(), if that's implemented then .to(foo).value doesn't make sense, because it means one can return a "value" for the original quantity in unspecified units. I still fall on the side of "well, you'll get your value in whatever unit the quantity is in".

If we go this route Quantities need to be completely changed to not have an implicit unit at all. It would have an internal _unit to keep track of what units the quantity was originally specified in, but otherwise it doesn't have a unit unless you ask for a value in specific units.

But this is annoying because if I make a bunch of Quantities in 'm/s' and I know the only units I'm going to care about in my application are 'm/s' it's going to get really annoying really fast to have to write v.value_in('m/s') every time I just want the "$#$T^@" value. I realize though that this is less obvious if you're dealing with some quantities derived from a series of operations.

So to avoid having to feel responsible if someone crashes the spaceship, why not just produce a warning when getting an implicit value from an unreduced dimensionless unit?

@embray
Member

embray commented Jan 28, 2013

One offer for a possible middle ground:

By default go with something like I described in my last comment, where a Quantity has no implicit value or unit (just an internal value and unit used to define it). To get a value for that quantity use .value_in(foo) (there would be no .to() which becomes redundant--that or we keep .to() but it returns a numerical value, not a new Quantity).

However, add a new optional default_unit boolean argument to the Quantity constructor. If default_unit=True then the unit used to define the Quantity is used as its default unit and features like __float__ will work implicitly. If safety is a concern we could set default_unit=False by default, while power users can always pass in True or even do something like Q = functools.partial(Quantity, default_unit=True) (maybe something like this can come built into the module). Alternatively, __float__ could still work even if default_unit=False, but it would cause a warning, at least (in fact I'd really rather have a warning than to have things just break unexpectedly due to the value of some flag).

The biggest question in my mind with this idea is how to propagate the default_unit option across multiple operations. I would propose just returning a logical conjunction of the default_unit for all Quantities involved in the expression.
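
That propagation rule could be as simple as AND-ing the flags across operands. A sketch with illustrative names (no such flag exists in astropy; this is just the proposal made concrete):

```python
def propagate_default_unit(*operands):
    # The result allows implicit __float__ only if every Quantity
    # involved in the expression was created with default_unit=True.
    return all(getattr(q, 'default_unit', False) for q in operands)

class Flagged:
    """Stand-in for a Quantity carrying the proposed default_unit flag."""
    def __init__(self, default_unit):
        self.default_unit = default_unit

print(propagate_default_unit(Flagged(True), Flagged(True)))    # True
print(propagate_default_unit(Flagged(True), Flagged(False)))   # False
```

One "unsafe" operand anywhere in the expression would thus disable implicit conversion for the whole result, which matches the conservative spirit of the proposal.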

@embray
Member

embray commented Jan 31, 2013

Seems fine to change it if everyone was able to come to rapid agreement.

@astrofrog
Member Author

Ok, I've merged #701 into this to make use of is_unity, which works as expected - thanks @mdboom!

@embray
Member

embray commented Jan 31, 2013

So yeah, is this good to go at last?

@astrofrog
Member Author

@iguananaut - no, #701 needs to be merged first (but once it is, then yes)

@embray
Member

embray commented Jan 31, 2013

Then are we changing #701 to 0.2?

@astrofrog
Member Author

@iguananaut - I think so, but #701 should now be trivial to finalize (there was a small issue with the docs that @mdboom needs to fix)

@astrofrog
Member Author

@iguananaut @mdboom @eteq - I've removed #701 from this branch, and instead have defined my own _is_unity function that checks for strictly unity cases (e.g. not even mm * m / u.micron / u.m would pass). Depending on what is decided in #701, we can always replace it with a built-in method later.

@astrofrog
Member Author

And while I agree that the test I am doing may be overly strict, it's only used to raise a warning, so the warning is always raised, except in a very special case where the unit is Unit(1), so I think that's fine (and we can relax it later if we think we can find other unambiguous cases).

@mdboom
Contributor

mdboom commented Feb 1, 2013

As I mentioned in #701, I'm not crazy about the new definition of _is_unity here. I think mm * m / u.micron / u.m should pass. What's the rationale for that?

@astrofrog
Member Author

@mdboom - hmm, so just to make sure I understand, you are saying that if the units are dimensionless, and the scale of the decomposed unit is 1, then we shouldn't have to raise a warning, because the units all cancel out and there is no scale? I guess I'm fine with that - the only reason I did it this way was to be on the absolute safe side - but if you don't foresee any issues, I'll change it. I'll still keep it separate from #701 since we don't know in what order they will be merged.
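
The strict and relaxed definitions can be contrasted on a toy unit represented as (scale, {base: power}) -- a purely illustrative representation, not astropy's:

```python
def is_unity_strict(unit):
    # strict: literally Unit(1) -- no composition at all, scale exactly 1
    scale, powers = unit
    return scale == 1 and powers == {}

def is_unity_relaxed(unit):
    # relaxed (as suggested above): decompose first, then accept any unit
    # whose base powers all cancel and whose net scale is exactly 1
    scale, powers = unit
    cancelled = {b: p for b, p in powers.items() if p != 0}
    return scale == 1 and cancelled == {}

plain    = (1.0, {})          # Unit(1)
m_per_m  = (1.0, {'m': 0})    # e.g. m/m, before zero powers are stripped
mm_per_m = (1e-3, {'m': 0})   # dimensionless, but carries a scale

print(is_unity_strict(plain), is_unity_relaxed(plain))        # True True
print(is_unity_strict(m_per_m), is_unity_relaxed(m_per_m))    # False True
print(is_unity_strict(mm_per_m), is_unity_relaxed(mm_per_m))  # False False
```

Note that both versions still reject anything with a leftover scale factor, so the warning-worthy prefixed cases stay caught either way.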

@mdboom
Contributor

mdboom commented Feb 1, 2013

@astrofrog: Yes, exactly. And the new _is_unity looks right to me.

@astrofrog
Member Author

@iguananaut @eteq - I'll merge this evening if there are no objections.

astrofrog added a commit that referenced this pull request Feb 2, 2013
Conceptual issue with float conversion when quantity is dimensionless
@astrofrog astrofrog merged commit 5e1dc19 into astropy:master Feb 2, 2013
@embray
Member

embray commented Feb 6, 2013

How I feel now that this issue is closed:

Finally

@taldcroft
Member

LOL

astrofrog added a commit that referenced this pull request Feb 6, 2013
Conceptual issue with float conversion when quantity is dimensionless
@mhvk
Contributor

mhvk commented Jul 25, 2013

@astrofrog - think this is solved by #929 too!

keflavich pushed a commit to keflavich/astropy that referenced this pull request Oct 9, 2013
Conceptual issue with float conversion when quantity is dimensionless
@astrofrog astrofrog deleted the unit/fix-float-conversion branch July 5, 2016 18:46