> Your picture could also be a compound function of a parabola, and then a square root function
"And then a"? That's not a mathematical operation I've ever heard of. If you mean the sum of a square root and a quadratic, it's definitely not that: the pictured function has two flat spots (points where the derivative is zero), and the sum of a square root and a quadratic can only ever have one flat spot at most.
If, instead, you mean that the pictured function could be a piecewise function with a square root piece and a quadratic piece, then no, it still couldn't be that. The square root function isn't capable of producing any flat spots: its derivative, $\frac{1}{2\sqrt{t}}$, is never zero.
If you meant that the pictured function could be the product of a square root function and a quadratic, then it could sort of, maybe, be that: you can generate a similar shape, but it'll never be quite right.
Now, since we're talking about ease curves, this actually looks pretty similar to the BackEase curve that shows up in WPF. The basis function for that is:

$$f(t) = t^3 - a\,t\sin(\pi t)$$

for some constant $a$ (the amplitude).
Now, the specific curve we're given above is actually the reverse of this and would be generated by the following:

$$f_{\text{rev}}(t) = 1 - f(1 - t) = 1 - (1-t)^3 + a\,(1-t)\sin(\pi(1-t))$$
For the curve in the OP, you'd set $a$ to about 0.4 to get the same shape.
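If you want to play with it, here's a quick sketch of that basis function and its reversed form in Python (the function names and the default $a = 0.4$ are my own; WPF's actual `BackEase` class wraps this same cubic-minus-sine form):

```python
import math

def back_ease(t, a=0.4):
    # BackEase-style basis: dips below zero ("backs up") before rising to 1
    return t ** 3 - a * t * math.sin(math.pi * t)

def back_ease_reversed(t, a=0.4):
    # Flipped version, matching the orientation of the curve in the OP
    return 1.0 - back_ease(1.0 - t, a)
```

Both functions map $0 \mapsto 0$ and $1 \mapsto 1$ (up to floating-point noise), and with $a = 0.4$ the basis briefly goes negative near the start, which is what produces the "back" overshoot.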
I should also mention that you can get very close to the same shape with a cubic polynomial. Just for fun, let's figure out how we can get the best equivalent cubic approximation. First, let's narrow down the cubic. We can notice that our function, $f$, has a few special properties:
- $f(0) = 0$
- $f(1) = 1$
- $f'(0) = 0$ (it's flat at $t = 0$; remember I'm working with the flipped version of the function)
Using these equations, we can find all the cubics that have these same properties. It turns out they're all of the form:

$$g(t) = b\,t^3 + (1-b)\,t^2$$

for a free parameter $b$.
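As a throwaway sanity check (not part of the derivation), you can verify numerically that every member of that family satisfies all three constraints, no matter what $b$ is:

```python
def g(t, b):
    # The one-parameter family of cubics: g(t) = b*t^3 + (1 - b)*t^2
    return b * t ** 3 + (1 - b) * t ** 2

def g_prime(t, b):
    # Its derivative: g'(t) = 3*b*t^2 + 2*(1 - b)*t
    return 3 * b * t ** 2 + 2 * (1 - b) * t

for b in (-1.0, 0.0, 0.5, 2.0, 10.0):
    assert g(0.0, b) == 0.0        # g(0) = 0
    assert g(1.0, b) == 1.0        # g(1) = 1
    assert g_prime(0.0, b) == 0.0  # flat at t = 0
```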
Now, to the fun part. We want to minimize the difference between $g(t)$ and $f(t)$ over the interval $[0, 1]$. One way of measuring how "different" the two functions are is with the mean square error:

$$E(a, b) = \int_0^1 \left(g(t) - f(t)\right)^2 \, dt$$
Now, what we want to do is, for a given value of $a$, find the value of $b$ that minimizes that error. That gives us the following equation:

$$\frac{\partial E}{\partial b} = \frac{\partial}{\partial b} \int_0^1 \left(g(t) - f(t)\right)^2 \, dt = 0$$
By the Leibniz integral rule, we can move that derivative inside the integral, which helps simplify some of the algebra. However, the algebra's still pretty tedious, so I'll spare you that and jump straight to the result. After we crank out all of it, we can solve for $b$:

$$b = 1 + \frac{630\,(\pi^2 - 8)}{\pi^5}\,a \approx 1 + 3.849\,a$$
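If you'd rather not trust the algebra (it's my own reconstruction of the tedious part, so treat it as a sketch), you can cross-check the closed form against a brute-force scan over $b$ using a numerical approximation of the integral:

```python
import math

def f(t, a):
    # The (flipped) BackEase basis function
    return t ** 3 - a * t * math.sin(math.pi * t)

def g(t, b):
    # The one-parameter cubic family
    return b * t ** 3 + (1 - b) * t ** 2

def mse(a, b, n=400):
    # Midpoint-rule approximation of the integral of (g - f)^2 over [0, 1]
    return sum((g((i + 0.5) / n, b) - f((i + 0.5) / n, a)) ** 2
               for i in range(n)) / n

def best_b_closed_form(a):
    # b = 1 + 630*(pi^2 - 8)/pi^5 * a
    return 1 + 630 * (math.pi ** 2 - 8) / math.pi ** 5 * a

def best_b_scan(a):
    # Crude grid search over b in [0, 6] with step 0.005
    candidates = [i * 0.005 for i in range(1201)]
    return min(candidates, key=lambda b: mse(a, b))
```

For $a = 0.4$, both approaches land at roughly $b \approx 2.54$.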
Anyway, there are other metrics you could choose to minimize. For example, you could try to minimize the maximum difference between the two functions. That's actually considerably more difficult in this case even though it doesn't involve any integration!
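For example, a crude way to approximate that minimax fit (just an illustrative grid search, not a proper exchange-style minimax algorithm) is to scan values of $b$ and keep whichever one gives the smallest worst-case gap:

```python
import math

def f(t, a):
    # The (flipped) BackEase basis function
    return t ** 3 - a * t * math.sin(math.pi * t)

def g(t, b):
    # The one-parameter cubic family
    return b * t ** 3 + (1 - b) * t ** 2

def max_abs_error(a, b, n=1000):
    # Worst-case |g - f|, sampled on a grid over [0, 1]
    return max(abs(g(i / n, b) - f(i / n, a)) for i in range(n + 1))

def best_b_minimax(a):
    # Grid search over b in [1, 5]; a real minimax fit would refine this
    candidates = [1 + i * 0.002 for i in range(2001)]
    return min(candidates, key=lambda b: max_abs_error(a, b))
```

Note that the $b$ this finds generally differs from the mean-square-error optimum: minimizing the worst-case gap and minimizing the average squared gap are different objectives.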