
Linear independence of polynomials

One of the exercises this week asked for a proof of linear independence for the set

\{x^i\}_{i\in {\bf N}}

inside the polynomials R[x] with real coefficients. Note, however, that the polynomials here are regarded as *functions* from R to R. Thus, the problem amounts to showing that if

c_0+c_1x+\cdots+c_nx^n=0

as a function, then all the c_i have to be zero. This does require proof. One quick way is to note that polynomial functions are infinitely differentiable. And if

f(x)=c_0+c_1x+\cdots+c_nx^n

is the zero function, then so are all its derivatives. In particular,

f^{(i)}(0)=0

for all i. But f^{(i)}(0)=i!c_i: differentiating i times kills every term of degree below i, and each term of degree above i still retains a factor of x, hence vanishes at x=0. Thus, c_i=0 for all i.
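
As a quick sanity check, here is a short sketch in Python using sympy (assuming sympy is available; the degree 3 and the coefficient names are arbitrary choices for illustration). It confirms the identity f^{(i)}(0)=i!c_i that the argument rests on:

```python
import sympy as sp

x = sp.symbols('x')
c = sp.symbols('c0:4')  # symbolic coefficients c0, c1, c2, c3

# f(x) = c0 + c1*x + c2*x^2 + c3*x^3, treated as a function of x
f = sum(ci * x**i for i, ci in enumerate(c))

# Differentiating i times and evaluating at 0 isolates i! * c_i,
# so the zero function forces every coefficient to vanish.
for i in range(4):
    lhs = sp.diff(f, x, i).subs(x, 0)
    assert sp.expand(lhs - sp.factorial(i) * c[i]) == 0
```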

One possible reason for confusion is that there is another ‘formal’ definition of R[x], which simply identifies a polynomial with its sequence of coefficients. That is, you can think of an element of R[x] as a function f:N \rightarrow R with *finite support*, meaning that f(i)=0 for all but finitely many i. With this definition, the polynomial x^i becomes identified with the function e_i that sends i to 1 and everything else to zero. If you take this approach, the linear independence also becomes formal. But in this problem, an element of R[x] is defined as a function of its variable x. This, of course, is the natural definition you’ve been familiar with at least since secondary school.
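
To make the ‘formal’ picture concrete, here is a minimal sketch in Python (the names e, add, and evaluate are mine, not from the exercise): a polynomial is stored as a finite-support function from N to R, i.e. a dict from exponents to nonzero coefficients, and evaluation bridges back to the function-of-x view.

```python
def e(i):
    """The basis element x^i: sends i to 1 and everything else to 0."""
    return {i: 1.0}

def add(f, g):
    """Pointwise sum; dropping zeros keeps the support finite."""
    h = {i: f.get(i, 0.0) + g.get(i, 0.0) for i in f.keys() | g.keys()}
    return {i: c for i, c in h.items() if c != 0.0}

def evaluate(f, x):
    """Bridge back to the function-of-x view: the sum of f(i) * x^i."""
    return sum(c * x**i for i, c in f.items())

# x^2 + 3x + 1 as a linear combination of the e_i
p = add(add(e(2), {1: 3.0}), e(0))
print(dict(sorted(p.items())))  # {0: 1.0, 1: 3.0, 2: 1.0}
print(evaluate(p, 2.0))         # 11.0
```

With this representation, linear independence is indeed formal: the e_i are just the standard basis vectors, and a finite linear combination of them is the zero function from N to R exactly when every coefficient vanishes.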

Here are two questions:

1. If you think of two polynomials f and g as functions from N to R with finite support, what is a nice way to write the product fg?

2. What is the advantage of this formal definition?

