All the components of this vector are exactly these partial derivatives.
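This sentence evidently refers to the gradient; as a hedged illustration, for a function f(x, y, z) the vector whose components are those partial derivatives is

    \[ \nabla f = \left( \frac{\partial f}{\partial x},\; \frac{\partial f}{\partial y},\; \frac{\partial f}{\partial z} \right) \]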
So, we want to decompose the position vector into a sum of simpler vectors.
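For a position vector in three dimensions, the decomposition in question is presumably the usual one into scalar multiples of the basis vectors:

    \[ \mathbf{r} = \langle x, y, z \rangle = x\,\hat{\imath} + y\,\hat{\jmath} + z\,\hat{k} \]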
The type of the vector depends on the type of the value being lifted.
It simply prints the contents of the argv vector.
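A minimal C sketch of such a program (the original source is not shown, so this is only an illustration of the behavior described):

    #include <stdio.h>

    /* Print every element of the argv vector, one per line. */
    int main(int argc, char *argv[])
    {
        for (int i = 0; i < argc; i++)
            printf("argv[%d] = %s\n", i, argv[i]);
        return 0;
    }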
This will fulfill our strategy of creating a vector for each letter in the data field.
It's the dot product between the normal vector of a plane and the vector along the line.
And to find the height of this thing, I need to know what the normal component of this vector actually is.
Parallel programming using MPI to multiply a matrix by a vector.
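A hedged C/MPI sketch of the idea: rows of the matrix are scattered across ranks, the vector is broadcast, each rank computes its partial product, and the pieces are gathered. The size N and the assumption that N divides evenly by the number of ranks are simplifications made purely for brevity, not taken from the original text.

    #include <mpi.h>
    #include <stdio.h>

    #define N 4                     /* assumed dimension, divisible by the rank count */

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int rows = N / nprocs;                 /* rows handled by each rank */
        double A[N][N], x[N], y[N];
        double Aloc[N][N], yloc[N];            /* local slices (over-allocated for simplicity) */

        if (rank == 0)                         /* rank 0 fills A and x with sample data */
            for (int i = 0; i < N; i++) {
                x[i] = 1.0;
                for (int j = 0; j < N; j++)
                    A[i][j] = i + j;
            }

        /* Distribute rows of A; broadcast x to every rank. */
        MPI_Scatter(A, rows * N, MPI_DOUBLE, Aloc, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        MPI_Bcast(x, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* Each rank multiplies its block of rows by x. */
        for (int i = 0; i < rows; i++) {
            yloc[i] = 0.0;
            for (int j = 0; j < N; j++)
                yloc[i] += Aloc[i][j] * x[j];
        }

        /* Collect the partial results into y on rank 0. */
        MPI_Gather(yloc, rows, MPI_DOUBLE, y, rows, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        if (rank == 0)
            for (int i = 0; i < N; i++)
                printf("y[%d] = %g\n", i, y[i]);

        MPI_Finalize();
        return 0;
    }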
And we've also learned about acceleration, which is the derivative of the velocity vector.
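In symbols, with r the position vector and v the velocity vector:

    \[ \mathbf{a}(t) = \frac{d\mathbf{v}}{dt} = \frac{d^2\mathbf{r}}{dt^2} \]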
It's now a linear combination of three vectors, and they happen to be...
For instance, spu_insert does not modify the contents of the vector.
So, you see, this coefficient becomes zero exactly when the line is perpendicular to the normal vector.
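A hedged reconstruction of the computation behind this remark: parametrize the line as r(t) = r0 + t v and write the plane as n · (r − p) = 0. Substituting the line into the plane equation gives

    \[ \mathbf{n}\cdot(\mathbf{r}_0 + t\,\mathbf{v} - \mathbf{p}) = 0
       \quad\Longrightarrow\quad
       t\,(\mathbf{n}\cdot\mathbf{v}) = \mathbf{n}\cdot(\mathbf{p} - \mathbf{r}_0) \]

and the coefficient n · v of t vanishes exactly when the direction of the line is perpendicular to the normal vector, i.e. when the line is parallel to the plane.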
To find the surface element together with a normal vector, I would just take the cross product of these two vectors.
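For a surface parametrized by r(u, v), the vector surface element is the cross product of the two tangent vectors:

    \[ d\mathbf{S} = \left( \frac{\partial \mathbf{r}}{\partial u} \times \frac{\partial \mathbf{r}}{\partial v} \right) du\, dv \]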
In particular, I can get the cosine of alpha; well, we know how to find the angle between two vectors.
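The standard formula referred to here:

    \[ \cos\alpha = \frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{a}|\,|\mathbf{b}|} \]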
This value can be negative, to extrude in the direction opposite the Face object's normal vector.
Why is the gradient vector perpendicular in one direction rather than the other?
You know... it's not the length of the vector that counts... it's how you apply the force.
So, the first things that we learned about in this class were vectors, and how to take the dot product of vectors.
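For reference, the dot product of two vectors in R^3 in components is:

    \[ \mathbf{a}\cdot\mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 \]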
Parallel text-document-to-vector conversion using LLR-based ngram generation.
What we will do is just take, at every point along the curve, the dot product between the vector field and the normal vector.
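Summing those dot products along the curve is, presumably, the flux of the field across the curve, written as the line integral (with n̂ the unit normal along C):

    \[ \int_C \mathbf{F}\cdot\hat{\mathbf{n}}\; ds \]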
Lastly, the memory allocated to the vector is also freed, and a NULL vector is returned to the caller.
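A minimal C sketch of that cleanup pattern, assuming a hypothetical vector type with a data pointer; the names vec_t and vec_destroy are illustrative, not taken from the original article.

    #include <stdlib.h>

    /* Hypothetical vector type used only for illustration. */
    typedef struct {
        double *data;
        size_t  len;
    } vec_t;

    /* Free the memory held by the vector and hand NULL back to the caller. */
    vec_t *vec_destroy(vec_t *v)
    {
        if (v != NULL) {
            free(v->data);   /* release the element storage */
            free(v);         /* release the vector object itself */
        }
        return NULL;         /* caller typically writes: v = vec_destroy(v); */
    }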
The functions matrix, array, and dim are simply convenience functions for setting the dimensions of a vector.
However, the problem is that vector processing assumes that each and every instruction will be applied to all elements of the vector.
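A small C illustration of the contrast. The packed vector type below uses the GCC/Clang vector_size extension, so this is a compiler-specific sketch rather than portable code.

    #include <stdio.h>

    typedef float v4sf __attribute__((vector_size(16)));   /* 4 packed floats */

    int main(void)
    {
        float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];

        /* Scalar code: one value per operation. */
        for (int i = 0; i < 4; i++)
            c[i] = a[i] + b[i];

        /* Vector code: the single '+' is applied to all four elements at once. */
        v4sf va = {1, 2, 3, 4}, vb = {10, 20, 30, 40};
        v4sf vc = va + vb;

        for (int i = 0; i < 4; i++)
            printf("%g %g\n", c[i], vc[i]);
        return 0;
    }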
And we know how to find a normal vector to the level set; namely, the gradient vector is always perpendicular to the level set.
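The one-line reason this works: for any curve r(t) that stays inside the level set f = c, the chain rule gives

    \[ \frac{d}{dt}\, f(\mathbf{r}(t)) = \nabla f \cdot \mathbf{r}'(t) = 0, \]

so the gradient is perpendicular to every tangent direction of the level set and can be used directly as a normal vector.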
It's the same vector. So, a lot of vectors we'll draw starting at the origin, but we don't have to.
For instance, it's also the square of the Euclidean norm on R2, the length of a two-dimensional vector, a part of the triangle inequality, and quite a bit more.
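For a two-dimensional vector v = (v1, v2), the quantity in question is

    \[ \|\mathbf{v}\|^2 = v_1^2 + v_2^2, \]

which is exactly the Pythagorean relation.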
The code in this article is actually non-vector (also known as scalar) code, meaning that it works with only one value at a time.
So we see that I've doubled the size of the vector, but I've much more than doubled the number of calls.