# Prime Numbers, Involutions, and P ?=? NP: Some Thoughts

Let X be a subset of P ∪ NP.

We know that each element x in X belongs to P or to NP. Suppose there exists an operator |–|: X → C such that every y in C is complex-valued, i.e. it may be represented as a + bi, where a and b are integers less than or equal to n.

For each x, we may assign it a complex value in C.
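As a minimal sketch of such an assignment (purely illustrative; the hashing scheme and labels are my own assumptions, not part of the text), each element can be mapped deterministically to a complex value a + bi with bounded integer parts:

```python
import hashlib

def to_complex(x: str, n: int) -> complex:
    """Map an element (here, a string label) to a complex value a + bi
    with 0 <= a, b <= n, via a deterministic hash. Purely illustrative."""
    h = int.from_bytes(hashlib.sha256(x.encode("utf-8")).digest(), "big")
    a = h % (n + 1)          # integer bounded by n
    b = (h >> 64) % (n + 1)  # a second integer bounded by n
    return complex(a, b)

# Example: assign complex values to a few hypothetical problem labels.
X = ["SAT", "SORTING", "FACTORING"]
values = {x: to_complex(x, 100) for x in X}
```

The SHA-256 hash is used here only to make the assignment deterministic and repeatable; any map from X into bounded integer pairs would do.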

Via Fourier analysis, this Z × Z may be mapped analytically via a time-invariant function (representative of a Nash equilibrium as time passes; i.e., each "tick" of time as expressed by Fourier analysis represents the infinite convergence of two independent frequency waves). This Nash equilibrium is exactly the set of all expressible problems in the classes P and NP (of which the problems of these classes expressible in a formal language system are subclasses).

Therefore, since we are working within a formal language system, there is assumed to be a finite number of expressible sentences that are communicable to other individuals. After all, the human brain is not perfect.

Via a time-invariant involution, we may separate the problems within X, as a subset of P ∪ NP, into a finite number of expressible sentences, each with an input (singly real-valued, expressible as a "hash" of one's spoken words) and an encoded output (singly imaginary-valued, likewise expressible as a "hash" of one's spoken words), with the entire "problem" handled by the encoded complex number in C.
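One concrete involution on complex values with this real/imaginary split is complex conjugation (a sketch under my own assumption that this is the kind of involution meant; applying it twice returns the original value, which is the defining property of an involution):

```python
def encode(input_hash: int, output_hash: int) -> complex:
    # Real part encodes the input, imaginary part the encoded output.
    return complex(input_hash, output_hash)

def involution(z: complex) -> complex:
    # Complex conjugation: involution(involution(z)) == z for all z.
    return z.conjugate()

z = encode(42, 7)
assert involution(involution(z)) == z
```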

Therefore, P ?=? NP is not a problem in the sense that it is solvable. I propose the Church–Turing hypothesis as the basis for the following argument:

If we assume we may use a typed lambda calculus structure via "anonymous functions", this theory seems to fit rather well. Every potential input can be encoded in a hash (expressible perhaps as some sort of Unicode in binary), as can every output (also expressible perhaps as some sort of Unicode). This hash, since it is expressible using 0s and 1s, may be mapped as f: {0, 1}^n × {0, 1}^n → Z × Z, which may in turn be mapped to an arbitrary complex number. Within this formal language system we may construct an invariant structure: one that represents the class of a problem, that is, whether the problem is in P or in NP. Specifically, problems in P run in less than exponential time (on average, as the number of trials for a solution goes to infinity; ideally in n · log n time), whereas problems in NP run in exponential time only. (This makes sense because prime numbers are known to be distributed logarithmically as n increases, and we are mapping "problems" via hashes onto a matrix representation of a complex number.) This also means that, if the universe is a computer simulation, our formal language system is equivalent to computer communication, since our formal language system could theoretically be modeled within a computer.
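The pipeline f: {0, 1}^n × {0, 1}^n → Z × Z → C can be sketched as follows (the treatment of Unicode text as UTF-8 bits and the fixed width n = 32 are my own illustrative assumptions):

```python
def bits(s: str, n: int = 32) -> str:
    """Encode a Unicode string as an n-bit binary string:
    UTF-8 bytes, truncated or zero-padded to exactly n bits."""
    raw = "".join(f"{byte:08b}" for byte in s.encode("utf-8"))
    return raw[:n].ljust(n, "0")

def f(input_text: str, output_text: str) -> complex:
    """Sketch of the mapping {0,1}^n x {0,1}^n -> Z x Z -> C."""
    a = int(bits(input_text), 2)   # first coordinate in Z
    b = int(bits(output_text), 2)  # second coordinate in Z
    return complex(a, b)
```

Note that truncating to n bits makes this map lossy for long strings; it is a sketch of the shape of the construction, not a faithful encoding.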

Interestingly, if these matrix operations are assumed to be acted on randomly, then over an infinite amount of time they will reach a Nash equilibrium of steady-state conditions, provided that the 2^n-th derivative is decreasing exponentially (as hardware systems get exponentially cheaper for exponentially more computing power) while the first derivative is always positive and increasing logarithmically (which implies that, over time, a Nash equilibrium will be reached within any local system if we assume the heat equation holds within the realm of all observable systems). This is because we may define an involution that splits up the problem classes P (solvable linearly) and NP (solvable at best in n ln n time, assuming the problem is exponential in nature and best solvable by binary search).

It seems clear, then, that P and NP are two separate classes describing related problems: a problem is "checkable" in NP with respect to P if it may be checked in polynomial time O(n^k). Then, via binary-search functionality within the formal language system describing the problem, the problem in NP may be optimally solved in time O(n^k · ln n), where the ln n factor reflects the fact that the problem becomes logarithmically more difficult to solve as the number of checks increases; due to advances in computing technology, such problems become easier to solve as time passes and hardware capabilities increase.
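For reference, the binary-search step this paragraph appeals to is the standard routine, which uses O(log n) comparisons over a sorted search space (this sketch shows only the standard routine, not the broader complexity claim, which is the text's own):

```python
def binary_search(sorted_candidates, target):
    """Standard binary search: O(log n) comparisons on a sorted list.
    Returns the index of target, or -1 if absent."""
    lo, hi = 0, len(sorted_candidates) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_candidates[mid] == target:
            return mid
        if sorted_candidates[mid] < target:
            lo = mid + 1   # target lies in the upper half
        else:
            hi = mid - 1   # target lies in the lower half
    return -1
```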

It is only natural: with time, we will be able to solve all problems in NP, as far as our computing power allows.

Additionally, we may use the constructs of kappa calculus to model all of this.

We are currently writing up more material in a formal document for all to understand and learn from.
