
How to calculate the new coordinates of the end of a vector?


The question is really about mathematics, which I am very bad at. There is an object located at coordinates (0, 0, 0). It rotates around the Y axis. On any key press it must move a preset distance in the direction it is facing. Simply put, there is a vector (a length and an angle relative to an axis). What formula gives the coordinates after the move?


Answer 1, Authority 100%

My answer is in terms of Unity and its capabilities; the mathematics is kept to a minimum. This is a simple linear solution, without a trajectory:

Take the gameObject whose position you need to predict and make sure that its view direction coincides with the blue (Z) axis in local space.

To see which space is currently displayed, or to switch it: in the Unity editor you can toggle the transform tool handles between World (Global) and Local on the toolbar, usually in the upper-left corner of the editor.

Once we know that the view direction is forward, we can project along it to predict the position.

using UnityEngine;

public class LinearPosPrediction : MonoBehaviour
{
  private void Update()
  {
    if (Input.anyKey)
    {
      Transform someTransform = transform;                    // the transform of your object
      var distance = 10f;                                     // the known distance for the prediction
      var prediction = someTransform.forward;                 // get the view direction (unit vector)
      prediction *= distance;                                 // scale the vector to the desired length
      prediction = someTransform.localPosition + prediction;  // here is the actual linear prediction
      someTransform.localPosition = prediction;               // move the object to the predicted position (linear)
    }
  }
}

Simply put, there is a vector (a length and an angle relative to an axis). What formula gives the coordinates after the move?

prediction *= distance;                                 // scale the vector to the desired length
prediction = someTransform.localPosition + prediction;  // here is the actual linear prediction
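
Written out with an explicit angle instead of transform.forward, the same idea looks like the sketch below (my assumption: the angle is the object's rotation around Y in degrees, measured from the +Z axis, which matches Unity's transform.eulerAngles.y):

Vector3 EndPoint(Vector3 start, float thetaDegrees, float distance)
{
  float rad = thetaDegrees * Mathf.Deg2Rad;                             // angle around Y, measured from +Z
  Vector3 direction = new Vector3(Mathf.Sin(rad), 0f, Mathf.Cos(rad));  // unit view direction
  return start + direction * distance;                                  // end point = start + direction * length
}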

Similarly, if the view direction is that of your camera, the camera most likely looks along its Z axis, which is exactly what we need, so the code above also works for the camera.

But if you want to move another object according to where your camera looks, you need to rotate the controlled object toward the camera's direction and then apply the predicted position in local space. You can also move without rotating the controlled object, but then the move must be in world space:

someTransform.position = prediction;
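
For example, a rough sketch of that idea (the controlled, cam and distance names here are placeholders, not from the code above):

controlled.rotation = Quaternion.LookRotation(cam.forward);  // turn the controlled object toward the camera's view direction
controlled.position += controlled.forward * distance;        // then move it forward in world space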

If you are predicting a position over 3D terrain, this is not the solution you need, since you would probably also want to check for walls, make sure you do not fall through the terrain, and so on.
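
A minimal sketch of such a check, assuming the names from the snippet above and working in world space:

if (Physics.Raycast(someTransform.position, someTransform.forward, out RaycastHit hit, distance))
{
  prediction = hit.point;  // stop at the wall / terrain instead of passing through it
}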

If you are trying to implement a trajectory or something similar, there are tutorials on how to do it more simply, cleanly and extensibly: a trajectory tutorial.

A wonderful answer, thank you, but there is still a small question. Is it possible, using this code, to move not the someTransform object but another one? That is, there is a cube, it is spinning, we get its view direction, set the distance, and by pressing the button we move not the cube but, say, a cylinder under it, or any other object, it does not matter. It is just that this code does not declare any public objects, and we attach the script to the cube, so I do not understand how to pass some other object in here. Thank you in advance. – suddendumb

I do not know how to reply to the comment here, so I am extending the answer.

using UnityEngine;

public class LinearPosPrediction : MonoBehaviour
{
  public float predictDistance = 10f;       // assign in the inspector or pass a value from other code; 10 by default
  public Transform subObject = null;        // assign in the inspector or pass a reference from other code; no default
  private Transform _selfTransform = null;  // cached reference to our own transform; can be removed if it bothers you

  // Moves the sub-object; can be called from another script as well as locally.
  public void SubObjectTranslate(Transform subObject, Vector3 position)
  {
    if (subObject == null)
    {
      Debug.LogError($"{this}: there is no reference to the sub-object!");
      return;
    }
    subObject.localPosition = position;  // move the object to the position (linear)
  }

  // The predicted position, just written compactly.
  public Vector3 ForwardPrediction(float distance)
  {
    if (_selfTransform == null)
    {
      _selfTransform = transform;
    }
    return _selfTransform.localPosition + (_selfTransform.forward * distance);
  }

  private void Start()  // cache the reference to our own transform; if Start was not called, the reference is set in ForwardPrediction
  {
    _selfTransform = transform;
  }

  private void Update()
  {
    if (Input.anyKey)
    {
      var prediction = ForwardPrediction(predictDistance);  // get the predicted position
      SubObjectTranslate(subObject, prediction);            // move the sub-object to the new position
    }
  }
}
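
For instance, a rough sketch of driving the same public methods from another script (the predictor and cylinderTransform names are placeholders):

var prediction = predictor.ForwardPrediction(5f);             // predicted position 5 units ahead of the cube
predictor.SubObjectTranslate(cylinderTransform, prediction);  // move, for example, the cylinder there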

Answer 2, Authority 100%

I will try to give a more detailed, complete answer. We will see how it turns out.

Suppose you have a two-dimensional image of the object and you want to rotate it in space.

The four corners of the image:

UL = (0, 0)
UR = (ns, 0)
BR = (ns, nl)
BL = (0, nl)

where:

  • ns – width
  • nl – height

Now consider the picture as if it lay on a vertical rectangle, centered at the origin, in the plane y = 0.

Let the coordinate units be pixels. Then the four corner points become:

UL = (x, y, z) = (-ns2, 0, nl2)
UR = (x, y, z) = (ns2, 0, nl2)
BR = (x, y, z) = (ns2, 0, -nl2)
BL = (x, y, z) = (-ns2, 0, -nl2)

where

  • ns2 = (ns - 1) / 2 is the coordinate of the horizontal center
  • nl2 = (nl - 1) / 2 is the coordinate of the vertical center

Consider a perspective camera located at a distance f = focal length (in equivalent pixel units) from the image, looking directly at the center of the image. Then yc = -f, where f is determined by the fov (field of view) for a given diagonal image size:

tan(fov / 2) = sqrt(ns^2 + nl^2) / (2 * f)

or

f = ns / (2 * tan(fov / 2))

where fov = the equivalent fov for a 35 mm film frame with a size of 36 x 24 mm. Thus,

fov = 180 * atan(36 / 24) / pi ≈ 56 degrees
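
As a rough worked example (assuming an image ns = 800 pixels wide and the fov ≈ 56° above): f = 800 / (2 * tan(28°)) ≈ 752 pixels.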

Then rotate the corner points of the rectangular image by the three angles pan, tilt and roll using the combined rotation matrix R.

Let the 3 rotation angles be defined in the following order:

  • pan = right-hand-rule positive rotation of the points around the Z axis
  • tilt = right-hand-rule negative rotation of the points around the X axis
  • roll = right-hand-rule positive rotation of the points around the Y axis

Then the combined rotation matrix is formed:

R00 =  (croll * cpan) + (sroll * stilt * span)
R01 =  (croll * span) - (sroll * stilt * cpan)
R02 =  (sroll * ctilt)
R10 = -(ctilt * span)
R11 =  (ctilt * cpan)
R12 =  (stilt)
R20 = -(sroll * cpan) + (croll * stilt * span)
R21 = -(sroll * span) - (croll * stilt * cpan)
R22 =  (croll * ctilt)

where the prefixes s and c stand for sin() and cos().
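
A minimal sketch of building this matrix in code (a hypothetical helper; the entry layout matches the R00…R22 indices above):

using System;

public static class RotationMatrix
{
    // Builds the combined pan/tilt/roll rotation matrix R (row-major: R[row, column]).
    public static float[,] Build(float panDeg, float tiltDeg, float rollDeg)
    {
        float p = panDeg * MathF.PI / 180f, t = tiltDeg * MathF.PI / 180f, r = rollDeg * MathF.PI / 180f;
        float cpan  = MathF.Cos(p), span  = MathF.Sin(p);
        float ctilt = MathF.Cos(t), stilt = MathF.Sin(t);
        float croll = MathF.Cos(r), sroll = MathF.Sin(r);
        return new float[,]
        {
            {  (croll * cpan) + (sroll * stilt * span),  (croll * span) - (sroll * stilt * cpan),  sroll * ctilt },
            { -(ctilt * span),                            ctilt * cpan,                             stilt        },
            { -(sroll * cpan) + (croll * stilt * span), -(sroll * span) - (croll * stilt * cpan),   croll * ctilt },
        };
    }
}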

Then project the rotated points onto the camera in perspective.

Since xc = zc = 0, the perspective equations become:

xp / f = (x - xc) / (y - yc)  =>  xp = f * x / (y + f)
zp / f = (z - zc) / (y - yc)  =>  zp = f * z / (y + f)

But we need to convert from xp to s (sample) and from zp to l (line), so that s and l are measured from the upper-left corner rather than from the center, and l increases downward:

xp = -ns / 2 + s
zp = nl / 2 - l

where:

  • ns = number of samples (width)
  • nl = number of lines (height)

Now compute:

ss1 = 0
sl1 = 0
ss2 = ns - 1
sl2 = 0
ss3 = ns - 1
sl3 = nl - 1
ss4 = 0
sl4 = nl - 1

ns2 = (ns - 1) / 2
nl2 = (nl - 1) / 2

ws1 = -ns2
wl1 = nl2
ws2 = ns2
wl2 = nl2
ws3 = ns2
wl3 = -nl2
ws4 = -ns2
wl4 = -nl2

x1 = (ws1 * R00) + (wl1 * R02)
y1 = (ws1 * R10) + (wl1 * R12)
z1 = (ws1 * R20) + (wl1 * R22)
x2 = (ws2 * R00) + (wl2 * R02)
y2 = (ws2 * R10) + (wl2 * R12)
z2 = (ws2 * R20) + (wl2 * R22)
x3 = (ws3 * R00) + (wl3 * R02)
y3 = (ws3 * R10) + (wl3 * R12)
z3 = (ws3 * R20) + (wl3 * R22)
x4 = (ws4 * R00) + (wl4 * R02)
y4 = (ws4 * R10) + (wl4 * R12)
z4 = (ws4 * R20) + (wl4 * R22)

Thus, the perspective equations become:

s = ((f * x) / (y + f)) + ns2
l = nl2 - ((f * z) / (y + f))

s1 = ((f * x1) / (y1 + f)) + ns2
l1 = nl2 - ((f * z1) / (y1 + f))
s2 = ((f * x2) / (y2 + f)) + ns2
l2 = nl2 - ((f * z2) / (y2 + f))
s3 = ((f * x3) / (y3 + f)) + ns2
l3 = nl2 - ((f * z3) / (y3 + f))
s4 = ((f * x4) / (y4 + f)) + ns2
l4 = nl2 - ((f * z4) / (y4 + f))
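
For illustration, a compact sketch of rotating and projecting all four corner points with these equations (it reuses the RotationMatrix.Build helper sketched above; the names follow the text, and ns, nl, f, pan, tilt and roll are assumed to come from the preceding steps):

static (float[] s, float[] l) ProjectCorners(int ns, int nl, float f, float pan, float tilt, float roll)
{
    float ns2 = (ns - 1) / 2f, nl2 = (nl - 1) / 2f;
    float[,] R = RotationMatrix.Build(pan, tilt, roll);  // helper sketched above

    // Centered corner points (ws, wl) in the x-z plane: UL, UR, BR, BL.
    var corners = new (float ws, float wl)[] { (-ns2, nl2), (ns2, nl2), (ns2, -nl2), (-ns2, -nl2) };
    var s = new float[4];
    var l = new float[4];
    for (int i = 0; i < 4; i++)
    {
        float x = corners[i].ws * R[0, 0] + corners[i].wl * R[0, 2];  // the plane's y is 0, so only
        float y = corners[i].ws * R[1, 0] + corners[i].wl * R[1, 2];  // the x and z columns of R are used
        float z = corners[i].ws * R[2, 0] + corners[i].wl * R[2, 2];
        s[i] = ((f * x) / (y + f)) + ns2;  // perspective projection to sample
        l[i] = nl2 - ((f * z) / (y + f));  // perspective projection to line
    }
    return (s, l);
}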

Now fit the four projected corner points into the output image:

smax = max(s1, s2, s3, s4)
smin = min(s1, s2, s3, s4)
lmax = max(l1, l2, l3, l4)
lmin = min(l1, l2, l3, l4)
dels = smax - smin + 1
dell = lmax - lmin + 1
if (dels > dell) {
  del  = dels
  ofss = 0
  ofsl = (nl - (dell * ns / dels)) / 2
} else {
  del  = dell
  ofsl = 0
  ofss = (ns - (dels * nl / dell)) / 2
}
ds1 = ofss + ((s1 - smin) * ns / del)
dl1 = ofsl + ((l1 - lmin) * nl / del)
ds2 = ofss + ((s2 - smin) * ns / del)
dl2 = ofsl + ((l2 - lmin) * nl / del)
ds3 = ofss + ((s3 - smin) * ns / del)
dl3 = ofsl + ((l3 - lmin) * nl / del)
ds4 = ofss + ((s4 - smin) * ns / del)
dl4 = ofsl + ((l4 - lmin) * nl / del)

Finally, we have obtained four projected points corresponding to the four original image corner points, and we can now use these coordinate pairs in a perspective distortion projection.

[ss1, sl1] => [ds1, dl1]
[ss2, sl2] => [ds2, dl2]
[ss3, sl3] => [ds3, dl3]
[ss4, sl4] => [ds4, dl4]

[Example images: the original image, and the results for pan = 45°, tilt = 45°, roll = 45°, and pan = 45° with tilt = 45°.]
