Denial cryptography based on graph theory

Cryptography – Particular algorithmic function encoding

Details

Type: Reexamination Certificate
Status: active
Classification: C380S277000, C713S176000
Patent number: 06823068

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to encryption, and more particularly, to an encryption system in which the plaintext and the raw cipher are of different lengths, and to denial-featured cryptography. Additional applications include pattern recognition and other situations in which one modifies the inferential visibility of data.
BACKGROUND OF THE INVENTION
Cryptographic systems have evolved along deeply seated “grooves”, or idiosyncrasies. Mainly:
1. To express messages with a simple alphabet.
2. To render a message hard to read by creating a message form (cipher) which is expressed with the same or a similar alphabet as the original message, and is of the same size as, or of a fixed ratio to, the original message.
Human languages are expressed with an alphabet which for most languages is limited to two to three dozen symbols. Cryptographers have embraced this paradigm, and thereby limited their process to ways by which a certain sequence of letters can be written as a different sequence (usually of the same alphabet), in a way that would confuse the unintended readers but would allow the intended readers to use a reverse process (decryption) to uncover the original message, which is assumed to be plain and readily understood.
Thus, the profound emotional expression of love can be expressed in English with its 26 letters as a statement:
I LOVE LUCY
which is readable to all English readers (though difficult to comprehend for people not conversant with English; alas, that is not an aspect of formal cryptography as defined above). To establish discrimination between those designated as intended readers and the rest of the English-speaking public, the same alphabet is typically used (the same 26 letters), but an encryption process would transform the message to, say:
JKOCXNGHL
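By way of illustration only, the following sketch shows the "same alphabet, same length" idea with a simple Caesar-style shift. The shift value of 3 is an arbitrary assumption for this sketch; it is not the transformation that produced the cipher above.

# Illustrative sketch: a Caesar-style shift over the 26-letter English alphabet.
# The shift value (3) is an arbitrary assumption; it is not the transformation
# that produced the JKOCXNGHL example above.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Replace each letter by the letter `shift` positions later (mod 26)."""
    return "".join(ALPHABET[(ALPHABET.index(ch) + shift) % 26]
                   for ch in plaintext.upper() if ch in ALPHABET)

def caesar_decrypt(cipher: str, shift: int) -> str:
    """The intended reader reverses the shift to uncover the original message."""
    return caesar_encrypt(cipher, -shift)

print(caesar_encrypt("I LOVE LUCY", 3))   # -> LORYHOXFB: same alphabet, same letter count
print(caesar_decrypt("LORYHOXFB", 3))     # -> ILOVELUCY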
The process that leads from the original message (as it reads before the formal encryption takes it on) to the cipher has also fallen into a deep groove of conservatism. It is carried out as a mathematical process that requires another input, called the “key” or encryption key, Ke. The respective idiosyncratic maxim says:
3. Ke should be as small as possible.
The intended reader, so the paradigm premise says, has his or her own key, a reading key or decryption key Kd, which together with the cipher serves as input to a decryption algorithm that uncovers the original message M. Kd is often the same as Ke (Kd=Ke), but not necessarily so. At any rate, Kd also submits to the smallness maxim:
4. Kd should be as small as possible.
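As a minimal sketch of this paradigm (an illustration only, not the method of the present invention), consider a symmetric arrangement in which Kd = Ke and the toy encryption algorithm is a byte-wise XOR with a repeating key:

from itertools import cycle

# Sketch of the paradigm: C = E(M, Ke) produces the cipher, M = D(C, Kd)
# recovers the message.  Here Kd = Ke (the symmetric case) and the toy
# algorithm is a byte-wise XOR with a repeating key -- an illustrative
# assumption only.

def E(M: bytes, Ke: bytes) -> bytes:
    """Cipher C = E(M, Ke): XOR each message byte with a repeating key byte."""
    return bytes(m ^ k for m, k in zip(M, cycle(Ke)))

def D(C: bytes, Kd: bytes) -> bytes:
    """M = D(C, Kd): XOR is its own inverse, so decryption repeats the step."""
    return bytes(c ^ k for c, k in zip(C, cycle(Kd)))

Ke = Kd = b"K"                      # the "smallness" maxims: a tiny shared key
C = E(b"I LOVE LUCY", Ke)
assert D(C, Kd) == b"I LOVE LUCY"   # the intended reader recovers M with Kd

Note that in this sketch the cipher has the same length as the message, in keeping with maxim 2 above; as stated in the field of the invention, the present invention departs from that premise.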
The published consensus of the profession has also subscribed to:
5. Kerckhoffs's law, which states that a good cryptographic system is one in which everything is fully exposed except the very identity (not the format) of the decryption key Kd, which too is expected to be a selection from a finite set of well-known possibilities.
The term “published consensus” warrants some elaboration. Cryptography is unique in as much as its maximum benefit is achieved when its achievements are left undisclosed. Furthermore, a would-be cryptanalyst (code-breaker), an unintended reader in our terminology, has a lot to gain by convincing cryptographic message writers that he or she cannot read ciphers constructed with a certain encryption algorithm which, in fact, the code breaker can “break”. If the message writer believes it, he or she would aggregate the important secrets into that cipher paradigm, thereby helping the cryptanalyst. The latter will not only be able to read the sensitive secrets of the message writers, he or she would also enjoy a distinct selection between what is sensitive and secret and what is not. This is because the gullible message writer is likely to point to his or her secrets by the very fact that he or she would encrypt them. It is an irony that in such cases it is better not to encrypt anything, and thereby achieve some protection by “drowning” the secrets within reams of innocuous information. For these reasons there emerged a big gap between what is officially said and published on the matter, and what is actually happening in the clandestine ditches where the battle for timely knowledge rages with great zeal and some unbecoming side effects. Therefore, unlike the case with other fields of science, one should be rather apprehensive in regarding the “published consensus”.
One enlightened way to review the prior art is to use the historical time-line. We may discern several distinct eras:
1. Antiquity up to WW-I.
2. WW-II encryption.
3. Electronic Computing Era.
4. The era of the information superhighway (the Internet).
Antiquity up to WW-I
Up to WW-I, formal encryption was based on changing messages written in the Latin or other prevailing alphabet by creating a message of equal size (in most cases), with the same alphabet. The changes were of two types, transposition and substitution: changing the order of the letters, or replacing each letter with another. The result looked so confusing that only determined mathematicians even tried to break those ciphers. Yet, for those mathematicians it was usually a matter of patience.
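A minimal sketch of the transposition idea (the letters are kept; only their order changes), assuming a simple columnar scheme; the column width of 4 is an arbitrary illustrative choice:

# Sketch of transposition: write the letters row by row into `width` columns,
# then read them out column by column.  The width of 4 is an arbitrary choice.

def transpose_encrypt(plaintext: str, width: int) -> str:
    letters = [c for c in plaintext.upper() if c.isalpha()]
    columns = [letters[i::width] for i in range(width)]
    return "".join("".join(col) for col in columns)

def transpose_decrypt(cipher: str, width: int) -> str:
    # assumes len(cipher) is a multiple of width, as in the example below
    rows = len(cipher) // width
    cols = [cipher[i * rows:(i + 1) * rows] for i in range(width)]
    return "".join(cols[c][r] for r in range(rows) for c in range(width))

c = transpose_encrypt("ATTACK AT DAWN", 4)
print(c)                              # -> ACDTKATAWATN: same letters, new order
assert transpose_decrypt(c, 4) == "ATTACKATDAWN"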
In most cases in this era the substitution process was fixed per letter; that is, if the letter g was substituted by k in one appearance, it was substituted by k for all other appearances. This type is named monoalphabetic substitution. The term is a bit misleading. The 'mono' attribute suggests that for each substituted letter the substituting letter is always the same. The 'alphabetic' attribute suggests that the encryption happens through fiddling with the alphabet.
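A minimal sketch of monoalphabetic substitution, assuming one arbitrary fixed table (here the reversed alphabet, the classical Atbash); the point is only that each letter is replaced the same way on every appearance:

import string

# Sketch of monoalphabetic substitution: a single fixed table maps each
# plaintext letter to the same cipher letter on every appearance.  The
# reversed-alphabet table (the classical Atbash) is an arbitrary choice.
PLAIN = string.ascii_uppercase
CIPHER = PLAIN[::-1]                          # A->Z, B->Y, ..., G->T, ...
ENC_TABLE = str.maketrans(PLAIN, CIPHER)
DEC_TABLE = str.maketrans(CIPHER, PLAIN)

print("GIGGLING".translate(ENC_TABLE))        # -> TRTTORMT: every G becomes T
assert "TRTTORMT".translate(DEC_TABLE) == "GIGGLING"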
Monoalphabetic substitution encryption has a gripping charm, perhaps because on one hand it appears so unsolvable, and on the other hand it just about always yields to patient amateur attacks. The fact is that even today, when monoalphabetic substitution is obsolete for any serious business, it is alive and well in the world of entertainment, and a large variety thereof is found in the form of puzzles and riddles in the most respected dailies and magazines.
That charm of simple alphabetic substitution sank this mode into the consciousness of the craft, and determined its further development for centuries. Encryption as it developed remained locked into this basic premise, adding, over the years, two modes of complexity (identified here and discussed below):
1. Homophonic substitution
2. Polyalphabetic substitution
The object of those complexities was to throw as many obstacles as possible in the path of the unintended readers' understanding.
A paradigm developed. The writer puts his message into ordinary language writing, using the common alphabet (26 letters in English). That writing is called the plaintext, suggesting it is plainly understood. The encryption was limited to changing the plaintext to a message which was expressed with the same alphabet (26 letters in English), but whose appearance was different enough from the plaintext that the latter would not be easily discovered. This hard-to-understand message form was called the ciphertext, or simply the cipher.
The homophonic complexity (not a very telling name) consisted of mapping a single letter into two or more letters. Instead of mapping j to y, one mapped j to uyr. This tripled the size of the message in its cipher form, but kept the ratio between the plaintext message (before the encryption) and the cipher (after the encryption) fixed, which means that once the method was identified, the cipher length betrayed the plaintext length.
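A minimal sketch of the homophonic idea as described here, assuming an arbitrary fixed table in which each letter maps to a three-letter group (as in j to uyr); the cipher is then exactly three times the plaintext length:

import string

# Sketch of the homophonic complexity as described above: each plaintext
# letter is replaced by a fixed three-letter group, so the cipher is exactly
# three times as long as the plaintext.  The table below is an arbitrary
# illustrative construction.
ALPHA = string.ascii_lowercase
TABLE = {c: c + ALPHA[(i + 7) % 26] + ALPHA[(i + 13) % 26]
         for i, c in enumerate(ALPHA)}
REVERSE = {group: letter for letter, group in TABLE.items()}

def homophonic_encrypt(plaintext: str) -> str:
    return "".join(TABLE[c] for c in plaintext.lower() if c in TABLE)

def homophonic_decrypt(cipher: str) -> str:
    # the intended reader re-reads the cipher in fixed three-letter groups
    return "".join(REVERSE[cipher[i:i + 3]] for i in range(0, len(cipher), 3))

c = homophonic_encrypt("lucy")
assert len(c) == 3 * len("lucy")   # the fixed 3:1 ratio betrays the plaintext length
assert homophonic_decrypt(c) == "lucy"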
The polyalphabetic variety was a one-to-many option: the same plaintext letter could be replaced with a different (or the same) letter each time. That is, k would become p on one appearance, c on another, n on a third, etc. This variety turned out to be the most serious avenue for encryption development for years to come. The big question was how to build such a mapping variety. In the monoalphabetic case one needed only a simple table that would match a plaintext letter with a cipher letter. But if a can be b on one occasion, c on another, d, e, f . . . including a, on different occasions, then clearly there
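A minimal sketch of the polyalphabetic idea, using a repeating keyword to select a different substitution for each position (a Vigenère-style scheme); the keyword "KEY" is an arbitrary illustrative assumption, not the method of this invention:

from itertools import cycle
import string

# Sketch of polyalphabetic substitution: the same plaintext letter is replaced
# by a different cipher letter on different appearances.  A repeating keyword
# selects the shift for each position (the classical Vigenere scheme); the
# keyword "KEY" is an arbitrary illustrative choice.
ALPHA = string.ascii_uppercase

def vigenere_encrypt(plaintext: str, keyword: str) -> str:
    letters = [c for c in plaintext.upper() if c in ALPHA]
    return "".join(ALPHA[(ALPHA.index(p) + ALPHA.index(k)) % 26]
                   for p, k in zip(letters, cycle(keyword.upper())))

def vigenere_decrypt(cipher: str, keyword: str) -> str:
    return "".join(ALPHA[(ALPHA.index(c) - ALPHA.index(k)) % 26]
                   for c, k in zip(cipher, cycle(keyword.upper())))

c = vigenere_encrypt("BANANA", "KEY")
print(c)                             # -> LELKRY: the three A's become E, K and Y
assert vigenere_decrypt(c, "KEY") == "BANANA"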
