diff --git a/ESN/EchoTorch-master/LICENSE b/ESN/EchoTorch-master/LICENSE
new file mode 100644
index 0000000..9cecc1d
--- /dev/null
+++ b/ESN/EchoTorch-master/LICENSE
@@ -0,0 +1,674 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+ The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users. We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors. You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+ To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received. You must make sure that they, too, receive
+or can get the source code. And you must show them these terms so they
+know their rights.
+
+ Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+ For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software. For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+ Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so. This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software. The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable. Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products. If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+ Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary. To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+ The Corresponding Source for a work in source code form is that
+same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+ 13. Use with the GNU Affero General Public License.
+
+ Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+ 14. Revised Versions of this License.
+
+ The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+ If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+ Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+ 15. Disclaimer of Warranty.
+
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. Limitation of Liability.
+
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+ 17. Interpretation of Sections 15 and 16.
+
+ If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ {one line to give the program's name and a brief idea of what it does.}
+ Copyright (C) {year} {name of author}
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+    along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+ If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+ {project} Copyright (C) {year} {fullname}
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+ You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+ The GNU General Public License does not permit incorporating your program
+into proprietary programs. If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License. But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/ESN/EchoTorch-master/README.md b/ESN/EchoTorch-master/README.md
new file mode 100644
index 0000000..687d771
--- /dev/null
+++ b/ESN/EchoTorch-master/README.md
@@ -0,0 +1,130 @@
+
+
+--------------------------------------------------------------------------------
+
+EchoTorch is a Python module based on PyTorch for implementing and testing
+various flavours of Echo State Network models. EchoTorch is intended for
+research purposes rather than production use. As it is based on PyTorch,
+EchoTorch's layers can be integrated into deep architectures.
+
+EchoTorch provides two ways to train models:
+* Classical ESN training with the Moore-Penrose pseudo-inverse or LU decomposition;
+* PyTorch gradient descent optimizers.
+
+
+
+
+
+Join our community to create datasets and deep-learning models! Chat with us on [Gitter](https://gitter.im/EchoTorch/Lobby) and join the [Google Group](https://groups.google.com/forum/#!forum/echotorch/) to collaborate with us.
+
+
+[codecov](https://codecov.io/gh/nschaetti/EchoTorch)
+[Documentation](http://echotorch.readthedocs.io/en/latest/?badge=latest&style=flat-square)
+[Build Status](https://travis-ci.org/nschaetti/EchoTorch)
+
+This repository consists of:
+
+* echotorch.datasets : Pre-built datasets for common ESN tasks
+* echotorch.models : Generic pretrained ESN models
+* echotorch.transforms : Data transformations specific to echo state networks
+* echotorch.utils : Tools, functions and measures for echo state networks
+
+## Getting started
+
+These instructions will get you a copy of the project up and running
+on your local machine for development and testing purposes.
+
+### Prerequisites
+
+You need the following packages to install EchoTorch:
+
+* PyTorch
+* TorchVision
+
+### Installation
+
+ pip install EchoTorch
+
+## Authors
+
+* **Nils Schaetti** - *Initial work* - [nschaetti](https://github.com/nschaetti/)
+
+## License
+
+This project is licensed under the GPLv3 License - see the [LICENSE](LICENSE) file
+for details.
+
+## Citing
+
+If you find EchoTorch useful for an academic publication, then please use the following BibTeX to cite it:
+
+```
+@misc{echotorch,
+ author = {Schaetti, Nils},
+ title = {EchoTorch: Reservoir Computing with pyTorch},
+ year = {2018},
+ publisher = {GitHub},
+ journal = {GitHub repository},
+ howpublished = {\url{https://github.com/nschaetti/EchoTorch}},
+}
+```
+
+## A short introduction
+
+### Classical ESN training
+
+You can simply create an ESN with the ESN or LiESN objects in the nn
+module.
+
+```python
+import echotorch.nn as etnn
+
+esn = etnn.LiESN(
+ input_dim,
+ n_hidden,
+ output_dim,
+ spectral_radius,
+ learning_algo='inv',
+ leaky_rate=leaky_rate
+)
+```
+
+Where
+
+* input_dim is the input dimensionality;
+* n_hidden is the size of the reservoir;
+* output_dim is the output dimensionality;
+* spectral_radius is the spectral radius, with a default value of 0.9;
+* learning_algo lets you choose which training algorithm to use;
+the possible values are inv, LU and sgd.
+
+You then just have to give the ESN the inputs and the expected outputs.
+
+```python
+from torch.autograd import Variable  # Variable wrapper (older PyTorch API)
+
+for data in trainloader:
+ # Inputs and outputs
+ inputs, targets = data
+
+ # To variable
+ inputs, targets = Variable(inputs), Variable(targets)
+
+ # Give the example to EchoTorch
+ esn(inputs, targets)
+# end for
+```
+
+After giving all the examples to EchoTorch, you just have to call the
+finalize method.
+
+```python
+esn.finalize()
+```
+
+The model is now trained and you can call the esn object to get a
+prediction.
+
+```python
+predicted = esn(test_input)
+```
+
+### ESN training with Stochastic Gradient Descent
+
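+The readout can also be learned with a standard PyTorch optimizer. Below is
+a minimal sketch, assuming an ESN created as in the previous section but
+with ``learning_algo='sgd'``; ``esn``, ``trainloader`` and ``n_epochs`` are
+placeholders from the previous examples, not EchoTorch API:
+
+```python
+import torch
+
+criterion = torch.nn.MSELoss()
+optimizer = torch.optim.SGD(esn.parameters(), lr=0.01, momentum=0.9)
+
+for epoch in range(n_epochs):
+    for data in trainloader:
+        # Inputs and outputs
+        inputs, targets = data
+
+        # Reset gradients, forward pass, loss, backward pass, update
+        optimizer.zero_grad()
+        outputs = esn(inputs)
+        loss = criterion(outputs, targets)
+        loss.backward()
+        optimizer.step()
+    # end for
+# end for
+```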
diff --git a/ESN/EchoTorch-master/docs/Makefile b/ESN/EchoTorch-master/docs/Makefile
new file mode 100644
index 0000000..17f63a6
--- /dev/null
+++ b/ESN/EchoTorch-master/docs/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS =
+SPHINXBUILD = sphinx-build
+SPHINXPROJ = EchoTorch
+SOURCEDIR = source
+BUILDDIR = build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
\ No newline at end of file
diff --git a/ESN/EchoTorch-master/docs/images/echotorch.png b/ESN/EchoTorch-master/docs/images/echotorch.png
new file mode 100644
index 0000000..a3d3901
Binary files /dev/null and b/ESN/EchoTorch-master/docs/images/echotorch.png differ
diff --git a/ESN/EchoTorch-master/docs/images/echotorch_complete.png b/ESN/EchoTorch-master/docs/images/echotorch_complete.png
new file mode 100644
index 0000000..d89df64
Binary files /dev/null and b/ESN/EchoTorch-master/docs/images/echotorch_complete.png differ
diff --git a/ESN/EchoTorch-master/docs/source/conf.py b/ESN/EchoTorch-master/docs/source/conf.py
new file mode 100644
index 0000000..3751f45
--- /dev/null
+++ b/ESN/EchoTorch-master/docs/source/conf.py
@@ -0,0 +1,164 @@
+# -*- coding: utf-8 -*-
+#
+# EchoTorch documentation build configuration file, created by
+# sphinx-quickstart on Thu Apr 6 11:30:46 2017.
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+sys.path.insert(0, os.path.abspath('../..'))  # make the echotorch package importable
+import echotorch
+#import sphinx_bootstrap_theme
+
+
+# -- General configuration ------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = ['sphinx.ext.autodoc',
+ 'sphinx.ext.todo',
+ 'sphinx.ext.coverage',
+ 'sphinx.ext.mathjax',
+ 'sphinx.ext.githubpages']
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+# source_suffix = ['.rst', '.md']
+source_suffix = '.rst'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'EchoTorch'
+copyright = u'2017, Nils Schaetti'
+author = u'Nils Schaetti'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = u'0.1'
+# The full version, including alpha/beta/rc tags.
+release = u'0.1'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This patterns also effect to html_static_path and html_extra_path
+exclude_patterns = []
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = True
+
+
+# -- Options for HTML output ----------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+#html_theme = 'bootstrap'
+#html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#
+# html_theme_options = {}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+
+# -- Options for HTMLHelp output ------------------------------------------
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'EchoTorchdoc'
+
+
+# -- Options for LaTeX output ---------------------------------------------
+
+latex_elements = {
+ # The paper size ('letterpaper' or 'a4paper').
+ #
+ # 'papersize': 'letterpaper',
+
+ # The font size ('10pt', '11pt' or '12pt').
+ #
+ # 'pointsize': '10pt',
+
+ # Additional stuff for the LaTeX preamble.
+ #
+ # 'preamble': '',
+
+ # Latex figure (float) alignment
+ #
+ # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+ (master_doc, 'EchoTorch.tex', u'EchoTorch Documentation',
+ u'Nils Schaetti', 'manual'),
+]
+
+
+# -- Options for manual page output ---------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ (master_doc, 'echotorch', u'EchoTorch Documentation',
+ [author], 1)
+]
+
+
+# -- Options for Texinfo output -------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (master_doc, 'EchoTorch', u'EchoTorch Documentation',
+     author, 'EchoTorch', 'Reservoir Computing and Echo State Networks with PyTorch.',
+ 'Miscellaneous'),
+]
+
+
+
diff --git a/ESN/EchoTorch-master/docs/source/echotorch.datasets.rst b/ESN/EchoTorch-master/docs/source/echotorch.datasets.rst
new file mode 100644
index 0000000..ec4877e
--- /dev/null
+++ b/ESN/EchoTorch-master/docs/source/echotorch.datasets.rst
@@ -0,0 +1,38 @@
+echotorch\.datasets package
+===========================
+
+Submodules
+----------
+
+echotorch\.datasets\.MackeyGlassDataset module
+----------------------------------------------
+
+.. automodule:: echotorch.datasets.MackeyGlassDataset
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+echotorch\.datasets\.MemTestDataset module
+------------------------------------------
+
+.. automodule:: echotorch.datasets.MemTestDataset
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+echotorch\.datasets\.NARMADataset module
+----------------------------------------
+
+.. automodule:: echotorch.datasets.NARMADataset
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+
+Module contents
+---------------
+
+.. automodule:: echotorch.datasets
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/ESN/EchoTorch-master/docs/source/echotorch.nn.rst b/ESN/EchoTorch-master/docs/source/echotorch.nn.rst
new file mode 100644
index 0000000..6be9d85
--- /dev/null
+++ b/ESN/EchoTorch-master/docs/source/echotorch.nn.rst
@@ -0,0 +1,32 @@
+echotorch.nn
+============
+
+.. automodule:: echotorch.nn
+.. currentmodule:: echotorch.nn
+
+Echo State Layers
+-----------------
+
+ESNCell
+~~~~~~~
+
+.. autoclass:: ESNCell
+ :members:
+
+ESN
+~~~
+
+.. autoclass:: ESN
+ :members:
+
+LiESNCell
+~~~~~~~~~
+
+.. autoclass:: LiESNCell
+ :members:
+
+LiESN
+~~~~~
+
+.. autoclass:: LiESN
+ :members:
diff --git a/ESN/EchoTorch-master/docs/source/echotorch.rst b/ESN/EchoTorch-master/docs/source/echotorch.rst
new file mode 100644
index 0000000..aaed1d2
--- /dev/null
+++ b/ESN/EchoTorch-master/docs/source/echotorch.rst
@@ -0,0 +1,19 @@
+echotorch package
+=================
+
+Subpackages
+-----------
+
+.. toctree::
+
+ echotorch.datasets
+ echotorch.nn
+ echotorch.utils
+
+Module contents
+---------------
+
+.. automodule:: echotorch
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/ESN/EchoTorch-master/docs/source/echotorch.utils.rst b/ESN/EchoTorch-master/docs/source/echotorch.utils.rst
new file mode 100644
index 0000000..b41a8e1
--- /dev/null
+++ b/ESN/EchoTorch-master/docs/source/echotorch.utils.rst
@@ -0,0 +1,30 @@
+echotorch\.utils package
+========================
+
+Submodules
+----------
+
+echotorch\.utils\.error\_measures module
+----------------------------------------
+
+.. automodule:: echotorch.utils.error_measures
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+echotorch\.utils\.utility\_functions module
+-------------------------------------------
+
+.. automodule:: echotorch.utils.utility_functions
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+
+Module contents
+---------------
+
+.. automodule:: echotorch.utils
+ :members:
+ :undoc-members:
+ :show-inheritance:
diff --git a/ESN/EchoTorch-master/docs/source/index.rst b/ESN/EchoTorch-master/docs/source/index.rst
new file mode 100644
index 0000000..36bd1b2
--- /dev/null
+++ b/ESN/EchoTorch-master/docs/source/index.rst
@@ -0,0 +1,32 @@
+.. EchoTorch documentation master file, created by
+ sphinx-quickstart on Thu Apr 6 11:30:46 2017.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+EchoTorch documentation
+=======================
+
+EchoTorch is a PyTorch-based library for Reservoir Computing and Echo State Networks on GPUs and CPUs.
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+ :caption: Notes
+
+ notes/*
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Package Reference
+
+ echotorch
+ echotorch.datasets
+ echotorch.nn
+ echotorch.utils
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
\ No newline at end of file
diff --git a/ESN/EchoTorch-master/docs/source/modules.rst b/ESN/EchoTorch-master/docs/source/modules.rst
new file mode 100644
index 0000000..96cbe13
--- /dev/null
+++ b/ESN/EchoTorch-master/docs/source/modules.rst
@@ -0,0 +1,7 @@
+echotorch
+=========
+
+.. toctree::
+ :maxdepth: 4
+
+ echotorch
diff --git a/ESN/EchoTorch-master/docs/source/notes/esn_learning.rst b/ESN/EchoTorch-master/docs/source/notes/esn_learning.rst
new file mode 100644
index 0000000..06add54
--- /dev/null
+++ b/ESN/EchoTorch-master/docs/source/notes/esn_learning.rst
@@ -0,0 +1,19 @@
+Echo State Network learning mechanics
+=====================================
+
+This note presents an overview of how Echo State Networks work and of
+their learning mechanics. It is not mandatory to understand the complete
+learning phase, but we recommend understanding the difference between
+classical ESN learning and gradient descent, as it will help you choose
+which one to use for a given task.
+
+.. _esn_model:
+
+The Echo State Network model
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
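+As a reference, a standard formulation of the leaky-integrator ESN (the
+model behind ``LiESN``) updates its reservoir state as follows; this is
+the usual textbook notation, not EchoTorch-specific code:
+
+.. math::
+
+    x(t+1) = (1 - a)\,x(t) + a\,f\left(W^{in} u(t+1) + W x(t)\right)
+
+where :math:`u(t)` is the input, :math:`x(t)` the reservoir state,
+:math:`a` the leaky rate and :math:`f` a non-linearity such as
+:math:`\tanh`. The output is a linear readout :math:`y(t) = W^{out} x(t)`,
+and classical ESN training only learns :math:`W^{out}`, typically with the
+Moore-Penrose pseudo-inverse or ridge regression.
+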
+.. _esn_learning:
+
+``esn_learning``
+~~~~~~~~~~~~~~~~
+
diff --git a/ESN/EchoTorch-master/echotorch/__init__.py b/ESN/EchoTorch-master/echotorch/__init__.py
new file mode 100644
index 0000000..8016bd8
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/__init__.py
@@ -0,0 +1,12 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+from . import datasets
+from . import models
+from . import nn
+from . import utils
+
+
+# All EchoTorch's modules
+__all__ = ['datasets', 'models', 'nn', 'utils']
diff --git a/ESN/EchoTorch-master/echotorch/datasets/LogisticMapDataset.py b/ESN/EchoTorch-master/echotorch/datasets/LogisticMapDataset.py
new file mode 100644
index 0000000..12fa1e7
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/datasets/LogisticMapDataset.py
@@ -0,0 +1,92 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+from torch.utils.data.dataset import Dataset
+import numpy as np
+
+
+# Logistic Map dataset
+class LogisticMapDataset(Dataset):
+ """
+ Logistic Map dataset
+ """
+
+ # Constructor
+ def __init__(self, sample_len, n_samples, alpha=5, beta=11, gamma=13, c=3.6, b=0.13, seed=None):
+ """
+ Constructor
+        :param sample_len: Length of the time-series in time steps.
+        :param n_samples: Number of samples to generate.
+        :param alpha: Frequency of the first sinusoidal forcing term.
+        :param beta: Frequency of the second sinusoidal forcing term.
+        :param gamma: Frequency of the third sinusoidal forcing term.
+        :param c: Baseline value of the logistic map parameter r.
+        :param b: Amplitude of the sinusoidal forcing applied to r.
+        :param seed: Seed of random number generator.
+ """
+ # Properties
+ self.sample_len = sample_len
+ self.n_samples = n_samples
+ self.alpha = alpha
+ self.beta = beta
+ self.gamma = gamma
+ self.c = c
+ self.b = b
+ self.p2 = np.pi * 2
+
+ # Init seed if needed
+ if seed is not None:
+ torch.manual_seed(seed)
+ # end if
+ # end __init__
+
+ # Length
+ def __len__(self):
+ """
+ Length
+ :return:
+ """
+ return self.n_samples
+ # end __len__
+
+ # Get item
+ def __getitem__(self, idx):
+ """
+ Get item
+ :param idx:
+ :return:
+ """
+ # Time and forces
+        t = np.linspace(0, 1, self.sample_len, endpoint=False)
+ dforce = np.sin(self.p2 * self.alpha * t) + np.sin(self.p2 * self.beta * t) + np.sin(self.p2 * self.gamma * t)
+
+ # Series
+ series = torch.zeros(self.sample_len, 1)
+ series[0] = 0.6
+
+ # Generate
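+        # Iterate the logistic map x(i+1) = r(i) * x(i) * (1 - x(i)),
+        # with a sinusoidally forced parameter r(i) = c + b * dforce(i)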
+ for i in range(1, self.sample_len):
+ series[i] = self._logistic_map(series[i-1], self.c + self.b * dforce[i])
+ # end for
+
+ return series
+ # end __getitem__
+
+ #######################################
+ # Private
+ #######################################
+
+ # Logistic map
+ def _logistic_map(self, x, r):
+ """
+ Logistic map
+ :param x:
+ :param r:
+ :return:
+ """
+ return r * x * (1-x)
+ # end logistic_map
+
+# end LogisticMapDataset
diff --git a/ESN/EchoTorch-master/echotorch/datasets/MackeyGlassDataset.py b/ESN/EchoTorch-master/echotorch/datasets/MackeyGlassDataset.py
new file mode 100644
index 0000000..e87224c
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/datasets/MackeyGlassDataset.py
@@ -0,0 +1,75 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+from torch.utils.data.dataset import Dataset
+import collections
+
+
+# Mackey Glass dataset
+class MackeyGlassDataset(Dataset):
+ """
+ Mackey Glass dataset
+ """
+
+ # Constructor
+ def __init__(self, sample_len, n_samples, tau=17, seed=None):
+ """
+ Constructor
+ :param sample_len: Length of the time-series in time steps.
+ :param n_samples: Number of samples to generate.
+        :param tau: Delay of the MG system; commonly used values are tau=17 (mild chaos) and tau=30 (moderate chaos).
+ :param seed: Seed of random number generator.
+ """
+ # Properties
+ self.sample_len = sample_len
+ self.n_samples = n_samples
+ self.tau = tau
+ self.delta_t = 10
+ self.timeseries = 1.2
+ self.history_len = tau * self.delta_t
+
+ # Init seed if needed
+ if seed is not None:
+ torch.manual_seed(seed)
+ # end if
+ # end __init__
+
+ # Length
+ def __len__(self):
+ """
+ Length
+ :return:
+ """
+ return self.n_samples
+ # end __len__
+
+ # Get item
+ def __getitem__(self, idx):
+ """
+ Get item
+ :param idx:
+ :return:
+ """
+ # History
+ history = collections.deque(1.2 * torch.ones(self.history_len) + 0.2 * (torch.rand(self.history_len) - 0.5))
+
+        # Preallocate tensor for the time series
+ inp = torch.zeros(self.sample_len, 1)
+
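+        # Integrate the Mackey-Glass delay differential equation
+        # dx/dt = 0.2 * x(t - tau) / (1 + x(t - tau)^10) - 0.1 * x(t)
+        # with an Euler step of size 1 / delta_t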
+ # For each time step
+ for timestep in range(self.sample_len):
+ for _ in range(self.delta_t):
+ xtau = history.popleft()
+ history.append(self.timeseries)
+ self.timeseries = history[-1] + (0.2 * xtau / (1.0 + xtau ** 10) - 0.1 * history[-1]) / self.delta_t
+ # end for
+ inp[timestep] = self.timeseries
+ # end for
+
+ # Squash timeseries through tanh
+        return torch.tanh(inp - 1)
+ # end __getitem__
+
+# end MackeyGlassDataset
diff --git a/ESN/EchoTorch-master/echotorch/datasets/MemTestDataset.py b/ESN/EchoTorch-master/echotorch/datasets/MemTestDataset.py
new file mode 100644
index 0000000..8620125
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/datasets/MemTestDataset.py
@@ -0,0 +1,61 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+from torch.utils.data.dataset import Dataset
+
+
+# Generates a series of input timeseries and delayed versions as outputs.
+class MemTestDataset(Dataset):
+ """
+ Generates a series of input timeseries and delayed versions as outputs.
+ Delay is given in number of timesteps. Can be used to empirically measure the
+ memory capacity of a system.
+ """
+
+ # Constructor
+ def __init__(self, sample_len, n_samples, n_delays=10, seed=None):
+ """
+ Constructor
+ :param sample_len: Length of the time-series in time steps.
+ :param n_samples: Number of samples to generate.
+        :param n_delays: Number of delayed versions of the input to generate as outputs.
+ :param seed: Seed of random number generator.
+ """
+ # Properties
+ self.sample_len = sample_len
+ self.n_samples = n_samples
+ self.n_delays = n_delays
+
+ # Init seed if needed
+ if seed is not None:
+ torch.manual_seed(seed)
+ # end if
+ # end __init__
+
+ # Length
+ def __len__(self):
+ """
+ Length
+ :return:
+ """
+ return self.n_samples
+ # end __len__
+
+ # Get item
+ def __getitem__(self, idx):
+ """
+ Get item
+ :param idx:
+ :return:
+ """
+ inputs = (torch.rand(self.sample_len, 1) - 0.5) * 1.6
+ outputs = torch.zeros(self.sample_len, self.n_delays)
+ for k in range(self.n_delays):
+ outputs[:, k:k+1] = torch.cat((torch.zeros(k + 1, 1), inputs[:-k - 1, :]), dim=0)
+ # end for
+ return inputs, outputs
+ # end __getitem__
+
+# end MemTestDataset
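+
+# Illustrative usage sketch (the values below are examples, not part of the
+# library): outputs[:, k] is the input delayed by k+1 steps, which an ESN
+# can be trained to reproduce in order to estimate its memory capacity.
+#
+#     dataset = MemTestDataset(sample_len=1000, n_samples=10, n_delays=10)
+#     inputs, outputs = dataset[0]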
diff --git a/ESN/EchoTorch-master/echotorch/datasets/NARMADataset.py b/ESN/EchoTorch-master/echotorch/datasets/NARMADataset.py
new file mode 100644
index 0000000..95368e2
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/datasets/NARMADataset.py
@@ -0,0 +1,105 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+from torch.utils.data.dataset import Dataset
+
+
+# 10th order NARMA task
+class NARMADataset(Dataset):
+ """
+ x-th order NARMA task
+ WARNING: this is an unstable dataset. There is a small chance the system
+ becomes unstable, leading to an unusable dataset. It is better to use
+ NARMA30, where this problem happens less often.
+ """
+
+ # Constructor
+ def __init__(self, sample_len, n_samples, system_order=10, seed=None):
+ """
+ Constructor
+ :param sample_len: Length of the time-series in time steps.
+ :param n_samples: Number of samples to generate.
+ :param system_order: Order of the NARMA system (10 or 30, default 10).
+ :param seed: Seed of random number generator.
+ """
+ # Properties
+ self.sample_len = sample_len
+ self.n_samples = n_samples
+ self.system_order = system_order
+
+ # System order
+ self.parameters = torch.zeros(4)
+ if system_order == 10:
+ self.parameters[0] = 0.3
+ self.parameters[1] = 0.05
+ self.parameters[2] = 9
+ self.parameters[3] = 0.1
+ else:
+ self.parameters[0] = 0.2
+ self.parameters[1] = 0.04
+ self.parameters[2] = 29
+ self.parameters[3] = 0.001
+ # end if
+
+ # Init seed if needed
+ if seed is not None:
+ torch.manual_seed(seed)
+ # end if
+
+ # Generate data set
+ self.inputs, self.outputs = self._generate()
+ # end __init__
+
+ #############################################
+ # OVERRIDE
+ #############################################
+
+ # Length
+ def __len__(self):
+ """
+ Length
+ :return:
+ """
+ return self.n_samples
+ # end __len__
+
+ # Get item
+ def __getitem__(self, idx):
+ """
+ Get item
+ :param idx:
+ :return:
+ """
+ return self.inputs[idx], self.outputs[idx]
+ # end __getitem__
+
+ ##############################################
+ # PRIVATE
+ ##############################################
+
+ # Generate
+ def _generate(self):
+ """
+ Generate dataset
+ :return:
+ """
+ inputs = list()
+ outputs = list()
+ for i in range(self.n_samples):
+ ins = torch.rand(self.sample_len, 1) * 0.5
+ outs = torch.zeros(self.sample_len, 1)
+ for k in range(self.system_order - 1, self.sample_len - 1):
+ outs[k + 1] = self.parameters[0] * outs[k] + self.parameters[1] * outs[k] * torch.sum(
+ outs[k - (self.system_order - 1):k + 1]) + 1.5 * ins[k - int(self.parameters[2])] * ins[k] + \
+ self.parameters[3]
+ # end for
+ inputs.append(ins)
+ outputs.append(outs)
+ # end for
+
+ return inputs, outputs
+ # end _generate
+
+# end NARMADataset
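+
+
+# A minimal usage sketch. For system_order=10 the recurrence implemented in
+# _generate() is the standard NARMA-10 task:
+#   y(t+1) = 0.3 y(t) + 0.05 y(t) sum_{i=0..9} y(t-i) + 1.5 u(t-9) u(t) + 0.1
+if __name__ == '__main__':
+ dataset = NARMADataset(sample_len=500, n_samples=1, system_order=10, seed=1)
+ u, y = dataset[0]
+ print(u.size(), y.size())  # (500, 1) (500, 1)
+# end if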
diff --git a/ESN/EchoTorch-master/echotorch/datasets/SwitchAttractorDataset.py b/ESN/EchoTorch-master/echotorch/datasets/SwitchAttractorDataset.py
new file mode 100644
index 0000000..e37b72e
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/datasets/SwitchAttractorDataset.py
@@ -0,0 +1,105 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+from torch.utils.data.dataset import Dataset
+import numpy as np
+
+
+# Switch attractor dataset
+class SwitchAttractorDataset(Dataset):
+ """
+ Generate a dataset where the reservoir must switch
+ between two attractors.
+ """
+
+ # Constructor
+ def __init__(self, sample_len, n_samples, seed=None):
+ """
+ Constructor
+ :param sample_len: Length of the time-series in time steps.
+ :param n_samples: Number of samples to generate.
+ :param seed: Seed of random number generator.
+ """
+ # Properties
+ self.sample_len = sample_len
+ self.n_samples = n_samples
+
+ # Init seed if needed
+ if seed is not None:
+ torch.manual_seed(seed)
+ # end if
+
+ # Generate data set
+ self.inputs, self.outputs = self._generate()
+ # end __init__
+
+ #############################################
+ # OVERRIDE
+ #############################################
+
+ # Length
+ def __len__(self):
+ """
+ Length
+ :return:
+ """
+ return self.n_samples
+ # end __len__
+
+ # Get item
+ def __getitem__(self, idx):
+ """
+ Get item
+ :param idx:
+ :return:
+ """
+ return self.inputs[idx], self.outputs[idx]
+ # end __getitem__
+
+ ##############################################
+ # PRIVATE
+ ##############################################
+
+ # Generate
+ def _generate(self):
+ """
+ Generate dataset
+ :return:
+ """
+ inputs = list()
+ outputs = list()
+
+ # Generate each sample
+ for i in range(self.n_samples):
+ # Start and stop positions of the two pulses
+ start = np.random.randint(0, self.sample_len)
+ stop = np.random.randint(start, start + self.sample_len // 2)
+
+ # Limits
+ if stop >= self.sample_len:
+ stop = self.sample_len - 1
+ # end if
+
+ # Sample tensor
+ inp = torch.zeros(self.sample_len, 1)
+ out = torch.zeros(self.sample_len)
+
+ # Set inputs
+ inp[start, 0] = 1.0
+ inp[stop, 0] = 1.0
+
+ # Set outputs
+ out[start:stop] = 1.0
+
+ # Add
+ inputs.append(inp)
+ outputs.append(out)
+ # end for
+
+ return inputs, outputs
+ # end _generate
+
+# end SwitchAttractorDataset
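+
+
+# A minimal usage sketch (illustrative values): the input holds two unit
+# pulses and the target stays at 1.0 between them, so a model has to hold
+# its state between the two switching events.
+if __name__ == '__main__':
+ dataset = SwitchAttractorDataset(sample_len=100, n_samples=1, seed=1)
+ u, y = dataset[0]
+ print(u.size(), y.size())  # (100, 1) (100,)
+# end if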
diff --git a/ESN/EchoTorch-master/echotorch/datasets/__init__.py b/ESN/EchoTorch-master/echotorch/datasets/__init__.py
new file mode 100644
index 0000000..2b9dbb6
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/datasets/__init__.py
@@ -0,0 +1,12 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+from .LogisticMapDataset import LogisticMapDataset
+from .MackeyGlassDataset import MackeyGlassDataset
+from .MemTestDataset import MemTestDataset
+from .NARMADataset import NARMADataset
+from .SwitchAttractorDataset import SwitchAttractorDataset
+
+__all__ = [
+ 'LogisticMapDataset', 'MackeyGlassDataset', 'MemTestDataset', 'NARMADataset',
+ 'SwitchAttractorDataset'
+]
diff --git a/ESN/EchoTorch-master/echotorch/models/HNilsNet.py b/ESN/EchoTorch-master/echotorch/models/HNilsNet.py
new file mode 100644
index 0000000..d37a14e
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/models/HNilsNet.py
@@ -0,0 +1,50 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/models/HNilsNet.py
+# Description : A Hierarchical NilsNet module.
+# Date : 09th of April, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti, University of Neuchâtel
+
+# Imports
+import torchvision
+import torch.nn as nn
+
+
+# A Hierarchical NilsNet
+class HNilsNet(nn.Module):
+ """
+ A Hierarchical NilsNet
+ """
+
+ # Constructor
+ def __init__(self):
+ """
+ Constructor
+ """
+ super(HNilsNet, self).__init__()
+ # end __init__
+
+ # Forward
+ def forward(self):
+ """
+ Forward
+ :return:
+ """
+ pass
+ # end forward
+
+# end HNilsNet
diff --git a/ESN/EchoTorch-master/echotorch/models/NilsNet.py b/ESN/EchoTorch-master/echotorch/models/NilsNet.py
new file mode 100644
index 0000000..0c9381f
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/models/NilsNet.py
@@ -0,0 +1,78 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/models/NilsNet.py
+# Description : A NilsNet module.
+# Date : 09th of April, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti, University of Neuchâtel
+
+# Imports
+import torchvision
+import torch.nn as nn
+from echotorch import nn as ecnn
+
+
+# A NilsNet
+class NilsNet(nn.Module):
+ """
+ A NilsNet
+ """
+
+ # Constructor
+ def __init__(self, reservoir_dim, sfa_dim, ica_dim, pretrained=False, feature_selector='resnet18'):
+ """
+ Constructor
+ """
+ # Upper class
+ super(NilsNet, self).__init__()
+
+ # ResNet
+ if feature_selector == 'resnet18':
+ self.feature_selector = torchvision.models.resnet18(pretrained=pretrained)
+ elif feature_selector == 'resnet34':
+ self.feature_selector = torchvision.models.resnet34(pretrained=pretrained)
+ elif feature_selector == 'resnet50':
+ self.feature_selector = torchvision.models.resnet50(pretrained=pretrained)
+ elif feature_selector == 'alexnet':
+ self.feature_selector = torchvision.models.alexnet(pretrained=pretrained)
+ # end if
+
+ # Skip the final fully-connected layer (assumes a ResNet-style .fc attribute; alexnet exposes .classifier instead)
+ self.reservoir_input_dim = self.feature_selector.fc.in_features
+ self.feature_selector.fc = ecnn.Identity()
+
+ # Echo State Network
+ # self.esn = ecnn.ESNCell(input_dim=self.reservoir_input_dim, output_dim=reservoir_dim)
+
+ # Slow feature analysis layer
+ # self.sfa = ecnn.SFACell(input_dim=reservoir_dim, output_dim=sfa_dim)
+
+ # Independent Feature Analysis layer
+ # self.ica = ecnn.ICACell(input_dim=sfa_dim, output_dim=ica_dim)
+ # end __init__
+
+ # Forward
+ def forward(self, x):
+ """
+ Forward
+ :return:
+ """
+ # ResNet
+ return self.feature_selector(x)
+ # end forward
+
+# end NilsNet
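+
+
+# A minimal usage sketch (hypothetical dimensions; assumes ImageNet-style
+# inputs of shape (batch, 3, 224, 224) and the 0.3-era Variable API used
+# throughout this module). Only the feature-extraction path is active:
+if __name__ == '__main__':
+ import torch
+ from torch.autograd import Variable
+ net = NilsNet(reservoir_dim=100, sfa_dim=50, ica_dim=50)
+ x = Variable(torch.randn(1, 3, 224, 224))
+ features = net(x)
+ print(features.size())  # (1, 512) for the resnet18 feature selector
+# end if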
diff --git a/ESN/EchoTorch-master/echotorch/models/__init__.py b/ESN/EchoTorch-master/echotorch/models/__init__.py
new file mode 100644
index 0000000..83464dd
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/models/__init__.py
@@ -0,0 +1,24 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/models/__init__.py
+# Description : Models init.
+# Date : 09th of April, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti, University of Neuchâtel
+
+# Imports
+from .HNilsNet import HNilsNet
+from .NilsNet import NilsNet
diff --git a/ESN/EchoTorch-master/echotorch/nn/BDESN.py b/ESN/EchoTorch-master/echotorch/nn/BDESN.py
new file mode 100644
index 0000000..8761444
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/BDESN.py
@@ -0,0 +1,197 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/BDESN.py
+# Description : A Bi-directional Echo State Network module.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti, University of Neuchâtel
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch.sparse
+import torch
+import torch.nn as nn
+from torch.autograd import Variable
+from .BDESNCell import BDESNCell
+from .RRCell import RRCell
+
+
+# Bi-directional Echo State Network module
+class BDESN(nn.Module):
+ """
+ Bi-directional Echo State Network module
+ """
+
+ # Constructor
+ def __init__(self, input_dim, hidden_dim, output_dim, leaky_rate=1.0, spectral_radius=0.9, bias_scaling=0,
+ input_scaling=1.0, w=None, w_in=None, w_bias=None, sparsity=None, input_set=[1.0, -1.0],
+ w_sparsity=None, nonlin_func=torch.tanh, learning_algo='inv', ridge_param=0.0, create_cell=True):
+ """
+ Constructor
+ :param input_dim: Inputs dimension.
+ :param hidden_dim: Hidden layer dimension
+ :param output_dim: Output dimension
+ :param spectral_radius: Reservoir's spectral radius
+ :param bias_scaling: Scaling of the bias, a constant input to each neuron (default: 0, no bias)
+ :param input_scaling: Scaling of the input weight matrix, default 1.
+ :param w: Internal weights matrix
+ :param w_in: Input-reservoir weights matrix
+ :param w_bias: Bias weights matrix
+ :param sparsity:
+ :param input_set:
+ :param w_sparsity:
+ :param nonlin_func: Reservoir's activation function (tanh, sig, relu)
+ :param learning_algo: Which learning algorithm to use (inv, LU, grad)
+ """
+ super(BDESN, self).__init__()
+
+ # Properties
+ self.output_dim = output_dim
+
+ # Recurrent layer
+ if create_cell:
+ self.esn_cell = BDESNCell(
+ input_dim=input_dim, hidden_dim=hidden_dim, spectral_radius=spectral_radius, bias_scaling=bias_scaling,
+ input_scaling=input_scaling, w=w, w_in=w_in, w_bias=w_bias, sparsity=sparsity, input_set=input_set,
+ w_sparsity=w_sparsity, nonlin_func=nonlin_func, leaky_rate=leaky_rate, create_cell=create_cell
+ )
+ # end if
+
+ # Output layer
+ self.output = RRCell(
+ input_dim=hidden_dim * 2, output_dim=output_dim, ridge_param=ridge_param, learning_algo=learning_algo
+ )
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ # Hidden layer
+ @property
+ def hidden(self):
+ """
+ Hidden layer
+ :return:
+ """
+ return self.esn_cell.hidden
+ # end hidden
+
+ # Hidden weight matrix
+ @property
+ def w(self):
+ """
+ Hidden weight matrix
+ :return:
+ """
+ return self.esn_cell.w
+ # end w
+
+ # Input matrix
+ @property
+ def w_in(self):
+ """
+ Input matrix
+ :return:
+ """
+ return self.esn_cell.w_in
+ # end w_in
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ # Reset output layer
+ self.output.reset()
+
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Output matrix
+ def get_w_out(self):
+ """
+ Output matrix
+ :return:
+ """
+ return self.output.w_out
+ # end get_w_out
+
+ # Set W
+ def set_w(self, w):
+ """
+ Set W
+ :param w:
+ :return:
+ """
+ self.esn_cell.w = w
+ # end set_w
+
+ # Forward
+ def forward(self, u, y=None):
+ """
+ Forward
+ :param u: Input signal.
+ :return: Output or hidden states
+ """
+ # Compute hidden states
+ hidden_states = self.esn_cell(u)
+
+ # Learning algorithm
+ return self.output(hidden_states, y)
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training with LU factorization
+ """
+ # Finalize output training
+ self.output.finalize()
+
+ # Not in training mode anymore
+ self.train(False)
+ # end finalize
+
+ # Reset hidden layer
+ def reset_hidden(self):
+ """
+ Reset hidden layer
+ :return:
+ """
+ self.esn_cell.reset_hidden()
+ # end reset_hidden
+
+ # Get W's spectral radius
+ def get_spectral_radius(self):
+ """
+ Get W's spectral radius
+ :return: W's spectral radius
+ """
+ return self.esn_cell.get_spectral_radius()
+ # end spectral_radius
+
+# end BDESN
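+
+
+# A minimal training sketch (hypothetical dimensions; assumes the Variable
+# API used throughout this module and the RRCell train/finalize cycle):
+if __name__ == '__main__':
+ from torch.autograd import Variable
+ esn = BDESN(input_dim=1, hidden_dim=100, output_dim=1)
+ u = Variable(torch.rand(1, 200, 1))  # (batch, time, input_dim)
+ y = Variable(torch.rand(1, 200, 1))  # (batch, time, output_dim)
+ esn(u, y)  # training pass: accumulate ridge-regression statistics
+ esn.finalize()  # solve for the output weights
+ y_hat = esn(u)  # inference pass
+# end if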
diff --git a/ESN/EchoTorch-master/echotorch/nn/BDESNCell.py b/ESN/EchoTorch-master/echotorch/nn/BDESNCell.py
new file mode 100644
index 0000000..219989e
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/BDESNCell.py
@@ -0,0 +1,179 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/BDESNCell.py
+# Description : A Bi-directional Echo State Network cell.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti, University of Neuchâtel
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch.sparse
+import torch
+import torch.nn as nn
+from .LiESNCell import LiESNCell
+import numpy as np
+from torch.autograd import Variable
+
+
+# Bi-directional Echo State Network module
+class BDESNCell(nn.Module):
+ """
+ Bi-directional Echo State Network module
+ """
+
+ # Constructor
+ def __init__(self, input_dim, hidden_dim, spectral_radius=0.9, bias_scaling=0, input_scaling=1.0,
+ w=None, w_in=None, w_bias=None, sparsity=None, input_set=[1.0, -1.0], w_sparsity=None,
+ nonlin_func=torch.tanh, leaky_rate=1.0, create_cell=True):
+ """
+ Constructor
+ :param input_dim: Inputs dimension.
+ :param hidden_dim: Hidden layer dimension
+ :param spectral_radius: Reservoir's spectral radius
+ :param bias_scaling: Scaling of the bias, a constant input to each neuron (default: 0, no bias)
+ :param input_scaling: Scaling of the input weight matrix, default 1.
+ :param w: Internal weights matrix
+ :param w_in: Input-reservoir weights matrix
+ :param w_bias: Bias weights matrix
+ :param sparsity:
+ :param input_set:
+ :param w_sparsity:
+ :param nonlin_func: Reservoir's activation function (tanh, sig, relu)
+ """
+ super(BDESNCell, self).__init__()
+
+ # Recurrent layer
+ if create_cell:
+ self.esn_cell = LiESNCell(leaky_rate, False, input_dim, hidden_dim, spectral_radius, bias_scaling,
+ input_scaling, w, w_in, w_bias, None, sparsity, input_set, w_sparsity,
+ nonlin_func)
+ # end if
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ # Hidden weight matrix
+ @property
+ def w(self):
+ """
+ Hidden weight matrix
+ :return:
+ """
+ return self.esn_cell.w
+ # end w
+
+ # Input matrix
+ @property
+ def w_in(self):
+ """
+ Input matrix
+ :return:
+ """
+ return self.esn_cell.w_in
+ # end w_in
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ # Reset the reservoir state (this cell has no trainable output layer)
+ self.reset_hidden()
+
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Set W
+ def set_w(self, w):
+ """
+ Set W
+ :param w:
+ :return:
+ """
+ self.esn_cell.w = w
+ # end set_w
+
+ # Forward
+ def forward(self, u, y=None):
+ """
+ Forward
+ :param u: Input signal.
+ :param y: Target outputs
+ :return: Output or hidden states
+ """
+ # Forward compute hidden states
+ forward_hidden_states = self.esn_cell(u)
+
+ # Backward compute hidden states
+ backward_hidden_states = self.esn_cell(Variable(torch.from_numpy(np.flip(u.data.numpy(), 1).copy())))
+ backward_hidden_states = Variable(torch.from_numpy(np.flip(backward_hidden_states.data.numpy(), 1).copy()))
+
+ return torch.cat((forward_hidden_states, backward_hidden_states), dim=2)
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training
+ """
+ # This cell has no output layer of its own; just leave training mode
+
+ # Not in training mode anymore
+ self.train(False)
+ # end finalize
+
+ # Reset hidden layer
+ def reset_hidden(self):
+ """
+ Reset hidden layer
+ :return:
+ """
+ self.esn_cell.reset_hidden()
+ # end reset_hidden
+
+ # Get W's spectral radius
+ def get_spectral_radius(self):
+ """
+ Get W's spectral radius
+ :return: W's spectral radius
+ """
+ return self.esn_cell.get_spectral_radius()
+ # end spectral_radius
+
+# end BDESNCell
diff --git a/ESN/EchoTorch-master/echotorch/nn/BDESNPCA.py b/ESN/EchoTorch-master/echotorch/nn/BDESNPCA.py
new file mode 100644
index 0000000..52c660f
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/BDESNPCA.py
@@ -0,0 +1,209 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/BDESNPCA.py
+# Description : A Bi-directional Echo State Network module with PCA reduction.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti, University of Neuchâtel
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from .BDESNCell import BDESNCell
+from sklearn.decomposition import IncrementalPCA
+import matplotlib.pyplot as plt
+from torch.autograd import Variable
+
+
+# Bi-directional Echo State Network module with PCA reduction
+class BDESNPCA(nn.Module):
+ """
+ Bi-directional Echo State Network module with PCA reduction
+ """
+
+ # Constructor
+ def __init__(self, input_dim, hidden_dim, output_dim, pca_dim, linear_dim, leaky_rate=1.0, spectral_radius=0.9, bias_scaling=0,
+ input_scaling=1.0, w=None, w_in=None, w_bias=None, sparsity=None, input_set=[1.0, -1.0],
+ w_sparsity=None, nonlin_func=torch.tanh, learning_algo='inv', ridge_param=0.0, create_cell=True,
+ pca_batch_size=10):
+ """
+ Constructor
+ :param input_dim: Inputs dimension.
+ :param hidden_dim: Hidden layer dimension
+ :param output_dim: Output dimension
+ :param pca_dim: Dimension after PCA reduction
+ :param linear_dim: Hidden size of the feed-forward output layers
+ :param spectral_radius: Reservoir's spectral radius
+ :param bias_scaling: Scaling of the bias, a constant input to each neuron (default: 0, no bias)
+ :param input_scaling: Scaling of the input weight matrix, default 1.
+ :param w: Internal weights matrix
+ :param w_in: Input-reservoir weights matrix
+ :param w_bias: Bias weights matrix
+ :param sparsity:
+ :param input_set:
+ :param w_sparsity:
+ :param nonlin_func: Reservoir's activation function (tanh, sig, relu)
+ :param learning_algo: Which learning algorithm to use (inv, LU, grad)
+ """
+ super(BDESNPCA, self).__init__()
+
+ # Properties
+ self.output_dim = output_dim
+ self.pca_dim = pca_dim
+
+ # Recurrent layer
+ if create_cell:
+ self.esn_cell = BDESNCell(
+ input_dim=input_dim, hidden_dim=hidden_dim, spectral_radius=spectral_radius, bias_scaling=bias_scaling,
+ input_scaling=input_scaling, w=w, w_in=w_in, w_bias=w_bias, sparsity=sparsity, input_set=input_set,
+ w_sparsity=w_sparsity, nonlin_func=nonlin_func, leaky_rate=leaky_rate, create_cell=create_cell
+ )
+ # end if
+
+ # PCA
+ self.ipca = IncrementalPCA(n_components=pca_dim, batch_size=pca_batch_size)
+
+ # FFNN output
+ self.linear1 = nn.Linear(pca_dim, linear_dim)
+ self.linear2 = nn.Linear(linear_dim, output_dim)
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ # Hidden layer
+ @property
+ def hidden(self):
+ """
+ Hidden layer
+ :return:
+ """
+ return self.esn_cell.hidden
+ # end hidden
+
+ # Hidden weight matrix
+ @property
+ def w(self):
+ """
+ Hidden weight matrix
+ :return:
+ """
+ return self.esn_cell.w
+ # end w
+
+ # Input matrix
+ @property
+ def w_in(self):
+ """
+ Input matrix
+ :return:
+ """
+ return self.esn_cell.w_in
+ # end w_in
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ # Reset the reservoir state (the trained output here is the
+ # feed-forward pair linear1/linear2, not an RRCell)
+ self.reset_hidden()
+
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Output matrix
+ def get_w_out(self):
+ """
+ Output matrix (weights of the last feed-forward layer)
+ :return:
+ """
+ return self.linear2.weight
+ # end get_w_out
+
+ # Set W
+ def set_w(self, w):
+ """
+ Set W
+ :param w:
+ :return:
+ """
+ self.esn_cell.w = w
+ # end set_w
+
+ # Forward
+ def forward(self, u, y=None):
+ """
+ Forward
+ :param u: Input signal.
+ :return: Output or hidden states
+ """
+ # Compute hidden states
+ hidden_states = self.esn_cell(u)
+
+ # Resulting reduced states
+ pca_states = torch.zeros(hidden_states.size(0), hidden_states.size(1), self.pca_dim)
+
+ # Reduce each batch with incremental PCA
+ for b in range(hidden_states.size(0)):
+ pca_states[b] = torch.from_numpy(self.ipca.fit_transform(hidden_states.data[b].numpy()).copy())
+ # end for
+ pca_states = Variable(pca_states)
+
+ # FFNN output
+ return F.relu(self.linear2(F.relu(self.linear1(pca_states))))
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training
+ """
+ # No ridge-regression output layer to finalize here; just leave
+ # training mode
+
+ # Not in training mode anymore
+ self.train(False)
+ # end finalize
+
+ # Reset hidden layer
+ def reset_hidden(self):
+ """
+ Reset hidden layer
+ :return:
+ """
+ self.esn_cell.reset_hidden()
+ # end reset_hidden
+
+ # Get W's spectral radius
+ def get_spectral_radius(self):
+ """
+ Get W's spectral radius
+ :return: W's spectral radius
+ """
+ return self.esn_cell.get_spectral_radius()
+ # end spectral_radius
+
+# end BDESNPCA
diff --git a/ESN/EchoTorch-master/echotorch/nn/EESN.py b/ESN/EchoTorch-master/echotorch/nn/EESN.py
new file mode 100644
index 0000000..339d1b1
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/EESN.py
@@ -0,0 +1,113 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/EESN.py
+# Description : An ESN with an embedding layer at the beginning.
+# Date : 22 March, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+import torch
+import torch.sparse
+import torch.nn as nn
+from .LiESN import LiESN
+
+
+# An ESN with an embedding layer
+class EESN(nn.Module):
+ """
+ An ESN with an embedding layer
+ """
+
+ # Constructor
+ def __init__(self, voc_size, embedding_dim, hidden_dim, output_dim, spectral_radius=0.9,
+ bias_scaling=0, input_scaling=1.0, w=None, w_in=None, w_bias=None, sparsity=None,
+ input_set=[1.0, -1.0], w_sparsity=None, nonlin_func=torch.tanh, learning_algo='inv', ridge_param=0.0,
+ leaky_rate=1.0, train_leaky_rate=False, feedbacks=False, wfdb_sparsity=None,
+ normalize_feedbacks=False):
+ super(EESN, self).__init__()
+
+ # Embedding layer
+ self.embedding = nn.Embedding(voc_size, embedding_dim)
+
+ # Li-ESN
+ self.esn = LiESN(embedding_dim, hidden_dim, output_dim, spectral_radius, bias_scaling, input_scaling,
+ w, w_in, w_bias, sparsity, input_set, w_sparsity, nonlin_func, learning_algo, ridge_param,
+ leaky_rate, train_leaky_rate, feedbacks, wfdb_sparsity, normalize_feedbacks)
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ # Hidden layer
+ @property
+ def hidden(self):
+ """
+ Hidden layer
+ :return:
+ """
+ return self.esn.hidden
+
+ # end hidden
+
+ # Hidden weight matrix
+ @property
+ def w(self):
+ """
+ Hidden weight matrix
+ :return:
+ """
+ return self.esn.w
+
+ # end w
+
+ # Input matrix
+ @property
+ def w_in(self):
+ """
+ Input matrix
+ :return:
+ """
+ return self.esn.w_in
+ # end w_in
+
+ # Embedding weights
+ @property
+ def weights(self):
+ """
+ Embedding weights
+ :return:
+ """
+ return self.embedding.weight
+ # end weights
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Forward
+ def forward(self, u, y=None):
+ """
+ Forward
+ :param u: Input token indices
+ :param y: Target outputs
+ :return:
+ """
+ # Embedding layer
+ emb = self.embedding(u)
+
+ # ESN
+ return self.esn(emb, y)
+ # end forward
+
+# end EESN
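+
+
+# A minimal usage sketch (hypothetical sizes; u is a LongTensor of token
+# indices, as expected by nn.Embedding, and targets follow the same
+# train/finalize cycle as the underlying ESN):
+if __name__ == '__main__':
+ from torch.autograd import Variable
+ eesn = EESN(voc_size=1000, embedding_dim=50, hidden_dim=100, output_dim=2)
+ tokens = Variable(torch.LongTensor(1, 20).random_(0, 1000))
+ y = Variable(torch.rand(1, 20, 2))
+ eesn(tokens, y)
+# end if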
diff --git a/ESN/EchoTorch-master/echotorch/nn/ESN.py b/ESN/EchoTorch-master/echotorch/nn/ESN.py
new file mode 100644
index 0000000..53fcbae
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/ESN.py
@@ -0,0 +1,205 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/ESN.py
+# Description : An Echo State Network module.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti, University of Neuchâtel
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch.sparse
+import torch
+import torch.nn as nn
+from torch.autograd import Variable
+from . import ESNCell
+from .RRCell import RRCell
+
+
+# Echo State Network module
+class ESN(nn.Module):
+ """
+ Echo State Network module
+ """
+
+ # Constructor
+ def __init__(self, input_dim, hidden_dim, output_dim, spectral_radius=0.9, bias_scaling=0, input_scaling=1.0,
+ w=None, w_in=None, w_bias=None, w_fdb=None, sparsity=None, input_set=[1.0, -1.0], w_sparsity=None,
+ nonlin_func=torch.tanh, learning_algo='inv', ridge_param=0.0, create_cell=True,
+ feedbacks=False, with_bias=True, wfdb_sparsity=None, normalize_feedbacks=False):
+ """
+ Constructor
+ :param input_dim: Inputs dimension.
+ :param hidden_dim: Hidden layer dimension
+ :param output_dim: Output dimension
+ :param spectral_radius: Reservoir's spectral radius
+ :param bias_scaling: Scaling of the bias, a constant input to each neuron (default: 0, no bias)
+ :param input_scaling: Scaling of the input weight matrix, default 1.
+ :param w: Internal weights matrix
+ :param w_in: Input-reservoir weights matrix
+ :param w_bias: Bias weights matrix
+ :param w_fdb: Feedback weights matrix
+ :param sparsity:
+ :param input_set:
+ :param w_sparsity:
+ :param nonlin_func: Reservoir's activation function (tanh, sig, relu)
+ :param learning_algo: Which learning algorithm to use (inv, LU, grad)
+ """
+ super(ESN, self).__init__()
+
+ # Properties
+ self.output_dim = output_dim
+ self.feedbacks = feedbacks
+ self.with_bias = with_bias
+ self.normalize_feedbacks = normalize_feedbacks
+
+ # Recurrent layer
+ if create_cell:
+ self.esn_cell = ESNCell(input_dim, hidden_dim, spectral_radius, bias_scaling, input_scaling, w, w_in,
+ w_bias, w_fdb, sparsity, input_set, w_sparsity, nonlin_func, feedbacks, output_dim,
+ wfdb_sparsity, normalize_feedbacks)
+ # end if
+
+ # Output layer
+ self.output = RRCell(hidden_dim, output_dim, ridge_param, feedbacks, with_bias, learning_algo)
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ # Hidden layer
+ @property
+ def hidden(self):
+ """
+ Hidden layer
+ :return:
+ """
+ return self.esn_cell.hidden
+ # end hidden
+
+ # Hidden weight matrix
+ @property
+ def w(self):
+ """
+ Hidden weight matrix
+ :return:
+ """
+ return self.esn_cell.w
+ # end w
+
+ # Input matrix
+ @property
+ def w_in(self):
+ """
+ Input matrix
+ :return:
+ """
+ return self.esn_cell.w_in
+ # end w_in
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ # Reset output layer
+ self.output.reset()
+
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Output matrix
+ def get_w_out(self):
+ """
+ Output matrix
+ :return:
+ """
+ return self.output.w_out
+ # end get_w_out
+
+ # Set W
+ def set_w(self, w):
+ """
+ Set W
+ :param w:
+ :return:
+ """
+ self.esn_cell.w = w
+ # end set_w
+
+ # Forward
+ def forward(self, u, y=None):
+ """
+ Forward
+ :param u: Input signal.
+ :param y: Target outputs
+ :return: Output or hidden states
+ """
+ # Compute hidden states
+ if self.feedbacks and self.training:
+ hidden_states = self.esn_cell(u, y)
+ elif self.feedbacks and not self.training:
+ hidden_states = self.esn_cell(u, w_out=self.output.w_out)
+ else:
+ hidden_states = self.esn_cell(u)
+ # end if
+
+ # Learning algo
+ return self.output(hidden_states, y)
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training with LU factorization
+ """
+ # Finalize output training
+ self.output.finalize()
+
+ # Not in training mode anymore
+ self.train(False)
+ # end finalize
+
+ # Reset hidden layer
+ def reset_hidden(self):
+ """
+ Reset hidden layer
+ :return:
+ """
+ self.esn_cell.reset_hidden()
+ # end reset_hidden
+
+ # Get W's spectral radius
+ def get_spectral_radius(self):
+ """
+ Get W's spectral radius
+ :return: W's spectral radius
+ """
+ return self.esn_cell.get_spectral_radius()
+ # end spectral_radius
+
+# end ESN
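+
+
+# A minimal end-to-end sketch of the intended training cycle (hypothetical
+# dimensions; the readout accumulates least-squares statistics during the
+# training pass and finalize() solves for w_out):
+if __name__ == '__main__':
+ from torch.autograd import Variable
+ esn = ESN(input_dim=1, hidden_dim=100, output_dim=1)
+ u = Variable(torch.rand(1, 500, 1))  # (batch, time, input_dim)
+ y = Variable(torch.rand(1, 500, 1))  # (batch, time, output_dim)
+ esn(u, y)  # training pass: collect states and targets
+ esn.finalize()  # solve the ridge regression, leave training mode
+ y_hat = esn(u)  # evaluation pass: read out predictions
+# end if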
diff --git a/ESN/EchoTorch-master/echotorch/nn/ESNCell.py b/ESN/EchoTorch-master/echotorch/nn/ESNCell.py
new file mode 100644
index 0000000..64f4364
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/ESNCell.py
@@ -0,0 +1,373 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/ESNCell.py
+# Description : An Echo State Network layer.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+import torch
+import torch.sparse
+from torch.autograd import Variable
+import torch.nn as nn
+import echotorch.utils
+import numpy as np
+
+
+# Echo State Network layer
+class ESNCell(nn.Module):
+ """
+ Echo State Network layer
+ """
+
+ # Constructor
+ def __init__(self, input_dim, output_dim, spectral_radius=0.9, bias_scaling=0, input_scaling=1.0, w=None, w_in=None,
+ w_bias=None, w_fdb=None, sparsity=None, input_set=[1.0, -1.0], w_sparsity=None,
+ nonlin_func=torch.tanh, feedbacks=False, feedbacks_dim=None, wfdb_sparsity=None,
+ normalize_feedbacks=False):
+ """
+ Constructor
+ :param input_dim: Inputs dimension.
+ :param output_dim: Reservoir size
+ :param spectral_radius: Reservoir's spectral radius
+ :param bias_scaling: Scaling of the bias, a constant input to each neuron (default: 0, no bias)
+ :param input_scaling: Scaling of the input weight matrix, default 1.
+ :param w: Internal weights matrix
+ :param w_in: Input-reservoir weights matrix
+ :param w_bias: Bias weights matrix
+ :param sparsity:
+ :param input_set:
+ :param w_sparsity:
+ :param nonlin_func: Reservoir's activation function (tanh, sig, relu)
+ """
+ super(ESNCell, self).__init__()
+
+ # Params
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+ self.spectral_radius = spectral_radius
+ self.bias_scaling = bias_scaling
+ self.input_scaling = input_scaling
+ self.sparsity = sparsity
+ self.input_set = input_set
+ self.w_sparsity = w_sparsity
+ self.nonlin_func = nonlin_func
+ self.feedbacks = feedbacks
+ self.feedbacks_dim = feedbacks_dim
+ self.wfdb_sparsity = wfdb_sparsity
+ self.normalize_feedbacks = normalize_feedbacks
+
+ # Init hidden state
+ self.register_buffer('hidden', self.init_hidden())
+
+ # Initialize input weights
+ self.register_buffer('w_in', self._generate_win(w_in))
+
+ # Initialize reservoir weights randomly
+ self.register_buffer('w', self._generate_w(w))
+
+ # Initialize bias
+ self.register_buffer('w_bias', self._generate_wbias(w_bias))
+
+ # Initialize feedbacks weights randomly
+ if feedbacks:
+ self.register_buffer('w_fdb', self._generate_wfdb(w_fdb))
+ # end if
+ # end __init__
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Forward
+ def forward(self, u, y=None, w_out=None):
+ """
+ Forward
+ :param u: Input signal
+ :param y: Target output signal for teacher forcing
+ :param w_out: Output weights for teacher forcing
+ :return: Resulting hidden states
+ """
+ # Time length
+ time_length = int(u.size()[1])
+
+ # Number of batches
+ n_batches = int(u.size()[0])
+
+ # Outputs
+ outputs = Variable(torch.zeros(n_batches, time_length, self.output_dim))
+ outputs = outputs.cuda() if self.hidden.is_cuda else outputs
+
+ # For each batch
+ for b in range(n_batches):
+ # Reset hidden layer
+ self.reset_hidden()
+
+ # For each steps
+ for t in range(time_length):
+ # Current input
+ ut = u[b, t]
+
+ # Compute input layer
+ u_win = self.w_in.mv(ut)
+
+ # Apply W to x
+ x_w = self.w.mv(self.hidden)
+
+ # Feedback or not
+ if self.feedbacks and self.training and y is not None:
+ # Current target
+ yt = y[b, t]
+
+ # Compute feedback layer
+ y_wfdb = self.w_fdb.mv(yt)
+
+ # Add everything
+ x = u_win + x_w + y_wfdb + self.w_bias
+ elif self.feedbacks and not self.training and w_out is not None:
+ # Add bias
+ bias_hidden = torch.cat((Variable(torch.ones(1)), self.hidden), dim=0)
+
+ # Compute past output
+ yt = w_out.t().mv(bias_hidden)
+
+ # Normalize
+ if self.normalize_feedbacks:
+ yt -= torch.min(yt)
+ yt /= torch.max(yt) - torch.min(yt)
+ yt /= torch.sum(yt)
+ # end if
+
+ # Compute feedback layer
+ y_wfdb = self.w_fdb.mv(yt)
+
+ # Add everything
+ x = u_win + x_w + y_wfdb + self.w_bias
+ else:
+ # Add everything
+ x = u_win + x_w + self.w_bias
+ # end if
+
+ # Apply activation function
+ x = self.nonlin_func(x)
+
+ # Add to outputs
+ self.hidden.data = x.view(self.output_dim).data
+
+ # New last state
+ outputs[b, t] = self.hidden
+ # end for
+ # end for
+
+ return outputs
+ # end forward
+
+ # Init hidden layer
+ def init_hidden(self):
+ """
+ Init hidden layer
+ :return: Initiated hidden layer
+ """
+ return Variable(torch.zeros(self.output_dim), requires_grad=False)
+ # return torch.zeros(self.output_dim)
+ # end init_hidden
+
+ # Reset hidden layer
+ def reset_hidden(self):
+ """
+ Reset hidden layer
+ :return:
+ """
+ self.hidden.fill_(0.0)
+ # end reset_hidden
+
+ # Get W's spectral radius
+ def get_spectral_radius(self):
+ """
+ Get W's spectral radius
+ :return: W's spectral radius
+ """
+ return echotorch.utils.spectral_radius(self.w)
+ # end spectral_radius
+
+ ###############################################
+ # PRIVATE
+ ###############################################
+
+ # Generate W matrix
+ def _generate_w(self, w):
+ """
+ Generate W matrix
+ :return:
+ """
+ # Initialize reservoir weight matrix
+ if w is None:
+ w = self.generate_w(self.output_dim, self.w_sparsity)
+ else:
+ if callable(w):
+ w = w(self.output_dim)
+ # end if
+ # end if
+
+ # Scale it to spectral radius
+ w *= self.spectral_radius / echotorch.utils.spectral_radius(w)
+
+ return Variable(w, requires_grad=False)
+ # end generate_W
+
+ # Generate Win matrix
+ def _generate_win(self, w_in):
+ """
+ Generate Win matrix
+ :return:
+ """
+ # Initialize input weight matrix
+ if w_in is None:
+ if self.sparsity is None:
+ w_in = self.input_scaling * (
+ np.random.randint(0, 2, (self.output_dim, self.input_dim)) * 2.0 - 1.0)
+ w_in = torch.from_numpy(w_in.astype(np.float32))
+ else:
+ w_in = self.input_scaling * np.random.choice(np.append([0], self.input_set),
+ (self.output_dim, self.input_dim),
+ p=np.append([1.0 - self.sparsity],
+ [self.sparsity / len(self.input_set)] * len(
+ self.input_set)))
+ w_in = torch.from_numpy(w_in.astype(np.float32))
+ # end if
+ else:
+ if callable(w_in):
+ w_in = w_in(self.output_dim, self.input_dim)
+ # end if
+ # end if
+
+ return Variable(w_in, requires_grad=False)
+ # end _generate_win
+
+ # Generate Wbias matrix
+ def _generate_wbias(self, w_bias):
+ """
+ Generate Wbias matrix
+ :return:
+ """
+ # Initialize bias matrix
+ if w_bias is None:
+ w_bias = self.bias_scaling * (torch.rand(1, self.output_dim) * 2.0 - 1.0)
+ else:
+ if callable(w_bias):
+ w_bias = w_bias(self.output_dim)
+ # end if
+ # end if
+
+ return Variable(w_bias, requires_grad=False)
+ # end _generate_wbias
+
+ # Generate Wfdb matrix
+ def _generate_wfdb(self, w_fdb):
+ """
+ Generate Wfdb matrix
+ :return:
+ """
+ # Initialize feedbacks weight matrix
+ if w_fdb is None:
+ if self.wfdb_sparsity is None:
+ w_fdb = self.input_scaling * (
+ np.random.randint(0, 2, (self.output_dim, self.feedbacks_dim)) * 2.0 - 1.0)
+ w_fdb = torch.from_numpy(w_fdb.astype(np.float32))
+ else:
+ w_fdb = self.input_scaling * np.random.choice(np.append([0], self.input_set),
+ (self.output_dim, self.feedbacks_dim),
+ p=np.append([1.0 - self.wfdb_sparsity],
+ [self.wfdb_sparsity / len(
+ self.input_set)] * len(
+ self.input_set)))
+ w_fdb = torch.from_numpy(w_fdb.astype(np.float32))
+ # end if
+ else:
+ if callable(w_fdb):
+ w_fdb = w_fdb(self.output_dim, self.feedbacks_dim)
+ # end if
+ # end if
+
+ return Variable(w_fdb, requires_grad=False)
+ # end _generate_wfdb
+
+ ############################################
+ # STATIC
+ ############################################
+
+ # Generate W matrix
+ @staticmethod
+ def generate_w(output_dim, w_sparsity=None):
+ """
+ Generate W matrix
+ :param output_dim:
+ :param w_sparsity:
+ :return:
+ """
+ # Sparsity
+ if w_sparsity is None:
+ w = torch.rand(output_dim, output_dim) * 2.0 - 1.0
+ else:
+ w = np.random.choice([0.0, 1.0], (output_dim, output_dim),
+ p=[1.0 - w_sparsity, w_sparsity])
+ w[w == 1] = np.random.rand(len(w[w == 1])) * 2.0 - 1.0
+ w = torch.from_numpy(w.astype(np.float32))
+ # end if
+
+ # Return
+ return w
+ # end generate_w
+
+ # To sparse matrix
+ @staticmethod
+ def to_sparse(m):
+ """
+ To sparse matrix
+ :param m:
+ :return:
+ """
+ # Rows, columns and values
+ rows = torch.LongTensor()
+ columns = torch.LongTensor()
+ values = torch.FloatTensor()
+
+ # For each row
+ for i in range(m.shape[0]):
+ # For each column
+ for j in range(m.shape[1]):
+ if m[i, j] != 0.0:
+ rows = torch.cat((rows, torch.LongTensor([i])), dim=0)
+ columns = torch.cat((columns, torch.LongTensor([j])), dim=0)
+ values = torch.cat((values, torch.FloatTensor([m[i, j]])), dim=0)
+ # end if
+ # end for
+ # end for
+
+ # Indices
+ indices = torch.cat((rows.unsqueeze(0), columns.unsqueeze(0)), dim=0)
+
+ # To sparse
+ return torch.sparse.FloatTensor(indices, values)
+ # end to_sparse
+
+# end ESNCell
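+
+
+# A minimal sketch of the construction helpers (hypothetical sizes): after
+# construction the recurrent matrix is rescaled so that its spectral radius
+# matches the requested value.
+if __name__ == '__main__':
+ cell = ESNCell(input_dim=1, output_dim=100, spectral_radius=0.9)
+ print(cell.get_spectral_radius())  # ~0.9 up to numerical error
+ sparse_w = ESNCell.to_sparse(cell.w.data.numpy())
+# end if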
diff --git a/ESN/EchoTorch-master/echotorch/nn/GatedESN.py b/ESN/EchoTorch-master/echotorch/nn/GatedESN.py
new file mode 100644
index 0000000..e944b14
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/GatedESN.py
@@ -0,0 +1,301 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/GatedESN.py
+# Description : A Gated Echo State Network module.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti, University of Neuchâtel
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+from .LiESNCell import LiESNCell
+from sklearn.decomposition import IncrementalPCA
+from .PCACell import PCACell
+import matplotlib.pyplot as plt
+from torch.autograd import Variable
+
+
+# Gated Echo State Network
+class GatedESN(nn.Module):
+ """
+ Gated Echo State Network
+ """
+
+ # Constructor
+ def __init__(self, input_dim, reservoir_dim, pca_dim, hidden_dim, leaky_rate=1.0, spectral_radius=0.9,
+ bias_scaling=0, input_scaling=1.0, w=None, w_in=None, w_bias=None, sparsity=None,
+ input_set=[1.0, -1.0], w_sparsity=None, nonlin_func=torch.tanh,
+ create_cell=True):
+ """
+ Constructor
+ :param input_dim: Inputs dimension.
+ :param hidden_dim: Hidden layer dimension
+ :param reservoir_dim: Reservoir size
+ :param spectral_radius: Reservoir's spectral radius
+ :param bias_scaling: Scaling of the bias, a constant input to each neuron (default: 0, no bias)
+ :param input_scaling: Scaling of the input weight matrix, default 1.
+ :param w: Internal weights matrix
+ :param w_in: Input-reservoir weights matrix
+ :param w_bias: Bias weights matrix
+ :param sparsity:
+ :param input_set:
+ :param w_sparsity:
+ :param nonlin_func: Reservoir's activation function (tanh, sig, relu)
+ :param pca_dim: Dimension after PCA reduction (0 disables the PCA stage)
+ """
+ super(GatedESN, self).__init__()
+
+ # Properties
+ self.reservoir_dim = reservoir_dim
+ self.pca_dim = pca_dim
+ self.hidden_dim = hidden_dim
+ self.finalized = False
+
+ # Recurrent layer
+ if create_cell:
+ self.esn_cell = LiESNCell(
+ input_dim=input_dim, output_dim=reservoir_dim, spectral_radius=spectral_radius, bias_scaling=bias_scaling,
+ input_scaling=input_scaling, w=w, w_in=w_in, w_bias=w_bias, sparsity=sparsity, input_set=input_set,
+ w_sparsity=w_sparsity, nonlin_func=nonlin_func, leaky_rate=leaky_rate
+ )
+ # end if
+
+ # PCA
+ if self.pca_dim > 0:
+ self.pca_cell = PCACell(input_dim=reservoir_dim, output_dim=pca_dim)
+ # end if
+
+ # Initialize input update weights
+ self.register_parameter('wzp', nn.Parameter(self.init_wzp()))
+
+ # Initialize hidden update weights
+ self.register_parameter('wzh', nn.Parameter(self.init_wzh()))
+
+ # Initialize update bias
+ self.register_parameter('bz', nn.Parameter(self.init_bz()))
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ # Hidden layer
+ @property
+ def hidden(self):
+ """
+ Hidden layer
+ :return:
+ """
+ return self.esn_cell.hidden
+ # end hidden
+
+ # Hidden weight matrix
+ @property
+ def w(self):
+ """
+ Hidden weight matrix
+ :return:
+ """
+ return self.esn_cell.w
+ # end w
+
+ # Input matrix
+ @property
+ def w_in(self):
+ """
+ Input matrix
+ :return:
+ """
+ return self.esn_cell.w_in
+ # end w_in
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Init hidden vector
+ def init_hidden(self):
+ """
+ Init hidden layer
+ :return: Initiated hidden layer
+ """
+ return Variable(torch.zeros(self.hidden_dim), requires_grad=False)
+ # end init_hidden
+
+ # Init update vector
+ def init_update(self):
+ """
+ Init hidden layer
+ :return: Initiated hidden layer
+ """
+ return self.init_hidden()
+ # end init_update
+
+ # Init update-reduced matrix
+ def init_wzp(self):
+ """
+ Init update-reduced matrix
+ :return: Initiated update-reduced matrix
+ """
+ # Shaped (hidden_dim, pca_dim) so that wzp.mv(p_t) gives a hidden-sized vector
+ return torch.rand(self.hidden_dim, self.pca_dim)
+ # end init_wzp
+
+ # Init update-hidden matrix
+ def init_wzh(self):
+ """
+ Init update-hidden matrix
+ :return: Initiated update-hidden matrix
+ """
+ # Shaped (hidden_dim, hidden_dim) so that wzh.mv(h) gives a hidden-sized vector
+ return torch.rand(self.hidden_dim, self.hidden_dim)
+ # end init_wzh
+
+ # Init update bias
+ def init_bz(self):
+ """
+ Init update bias
+ :return:
+ """
+ return torch.rand(self.hidden_dim)
+ # end init_bz
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ # Reset PCA layer
+ self.pca_cell.reset()
+
+ # Reset reservoir
+ self.reset_reservoir()
+
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Forward
+ def forward(self, u, y=None):
+ """
+ Forward
+ :param u: Input signal.
+ :return: Output or hidden states
+ """
+ # Time length
+ time_length = int(u.size()[1])
+
+ # Number of batches
+ n_batches = int(u.size()[0])
+
+ # Compute reservoir states (detached: no gradient flows through the reservoir)
+ reservoir_states = self.esn_cell(u).detach()
+
+ # Reduce
+ if self.pca_dim > 0:
+ # Reduce states (detached as well)
+ pca_states = self.pca_cell(reservoir_states).detach()
+
+ # Stop here if we learn PCA
+ if self.finalized:
+ return
+ # end if
+
+ # Hidden states
+ hidden_states = Variable(torch.zeros(n_batches, time_length, self.hidden_dim))
+ hidden_states = hidden_states.cuda() if pca_states.is_cuda else hidden_states
+ else:
+ # Hidden states
+ hidden_states = Variable(torch.zeros(n_batches, time_length, self.hidden_dim))
+ hidden_states = hidden_states.cuda() if reservoir_states.is_cuda else hidden_states
+ # end if
+
+ # For each batch
+ for b in range(n_batches):
+ # Reset hidden layer
+ hidden = self.init_hidden()
+
+ # TO CUDA
+ if u.is_cuda:
+ hidden = hidden.cuda()
+ # end if
+
+ # For each steps
+ for t in range(time_length):
+ # Current reduced state
+ if self.pca_dim > 0:
+ pt = pca_states[b, t]
+ else:
+ pt = reservoir_states[b, t]
+ # end if
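+ # GRU-style update implemented below:
+ #   z_t = sigmoid(Wzp p_t + Wzh h_{t-1} + b_z)
+ #   h_t = (1 - z_t) * h_{t-1} + z_t * p_t
+ # Note: the elementwise mix of h_{t-1} and p_t assumes pca_dim == hidden_dim.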
+
+ # Compute update vector
+ zt = F.sigmoid(self.wzp.mv(pt) + self.wzh.mv(hidden) + self.bz)
+
+ # Compute hidden state
+ ht = (1.0 - zt) * hidden + zt * pt
+
+ # Add to outputs
+ hidden = ht.view(self.hidden_dim)
+
+ # New last state
+ hidden_states[b, t] = hidden
+ # end for
+ # end for
+
+ # Return hidden states
+ return hidden_states
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training with LU factorization
+ """
+ # Finalize output training
+ self.pca_cell.finalize()
+
+ # Finalized
+ self.finalized = True
+ # end finalize
+
+ # Reset reservoir layer
+ def reset_reservoir(self):
+ """
+ Reset hidden layer
+ :return:
+ """
+ self.esn_cell.reset_hidden()
+ # end reset_reservoir
+
+ # Reset hidden layer
+ def reset_hidden(self):
+ """
+ Reset hidden layer
+ :return:
+ """
+ self.hidden.fill_(0.0)
+ # end reset_hidden
+
+# end GatedESN
diff --git a/ESN/EchoTorch-master/echotorch/nn/HESN.py b/ESN/EchoTorch-master/echotorch/nn/HESN.py
new file mode 100644
index 0000000..4584899
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/HESN.py
@@ -0,0 +1,103 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/HESN.py
+# Description : ESN with input pre-trained and used with transfer learning.
+# Date : 22 March, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+import torch
+import torch.sparse
+import torch.nn as nn
+from .LiESN import LiESN
+
+
+# ESN with input pre-trained and used with transfer learning
+class HESN(nn.Module):
+ """
+ ESN with input pre-trained and used with transfer learning
+ """
+
+ # Constructor
+ def __init__(self, model, input_dim, hidden_dim, output_dim, spectral_radius=0.9,
+ bias_scaling=0, input_scaling=1.0, w=None, w_in=None, w_bias=None, sparsity=None,
+ input_set=[1.0, -1.0], w_sparsity=None, nonlin_func=torch.tanh, learning_algo='inv', ridge_param=0.0,
+ leaky_rate=1.0, train_leaky_rate=False, feedbacks=False, wfdb_sparsity=None,
+ normalize_feedbacks=False):
+ super(HESN, self).__init__()
+
+ # Pre-trained feature extraction model
+ self.model = model
+
+ # Li-ESN
+ self.esn = LiESN(input_dim, hidden_dim, output_dim, spectral_radius, bias_scaling, input_scaling,
+ w, w_in, w_bias, sparsity, input_set, w_sparsity, nonlin_func, learning_algo, ridge_param,
+ leaky_rate, train_leaky_rate, feedbacks, wfdb_sparsity, normalize_feedbacks)
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ # Hidden layer
+ @property
+ def hidden(self):
+ """
+ Hidden layer
+ :return:
+ """
+ return self.esn.hidden
+
+ # end hidden
+
+ # Hidden weight matrix
+ @property
+ def w(self):
+ """
+ Hidden weight matrix
+ :return:
+ """
+ return self.esn.w
+
+ # end w
+
+ # Input matrix
+ @property
+ def w_in(self):
+ """
+ Input matrix
+ :return:
+ """
+ return self.esn.w_in
+ # end w_in
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Forward
+ def forward(self, u, y=None):
+ """
+ Forward
+ :param u: Input signal
+ :param y: Target outputs
+ :return:
+ """
+ # Selected features
+ selected_features = self.model(u)
+
+ # ESN
+ return self.esn(selected_features, y)
+ # end forward
+
+# end HESN
diff --git a/ESN/EchoTorch-master/echotorch/nn/ICACell.py b/ESN/EchoTorch-master/echotorch/nn/ICACell.py
new file mode 100644
index 0000000..b1f7370
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/ICACell.py
@@ -0,0 +1,112 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/ICACell.py
+# Description : An Independent Component Analysis (ICA) cell.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch.sparse
+import torch
+import torch.nn as nn
+from torch.autograd import Variable
+
+
+# Independent Component Analysis layer
+class ICACell(nn.Module):
+ """
+ Independent Component Analysis layer. It can be used to handle different batch-mode algorithms for ICA.
+ """
+
+ # Constructor
+ def __init__(self, input_dim, output_dim):
+ """
+ Constructor
+ :param input_dim: Inputs dimension.
+ :param output_dim: Output dimension after ICA reduction
+ """
+ super(ICACell, self).__init__()
+
+ # Properties
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Forward
+ def forward(self, x, y=None):
+ """
+ Forward
+ :param x: Input signal.
+ :param y: Target outputs
+ :return: Output or hidden states
+ """
+ # Batch-mode ICA is not implemented yet; this cell is currently a
+ # placeholder that passes its input through unchanged.
+ return x
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training with LU factorization or Pseudo-inverse
+ """
+ pass
+ # end finalize
+
+ ###############################################
+ # PRIVATE
+ ###############################################
+
+ # Add constant
+ def _add_constant(self, x):
+ """
+ Add constant
+ :param x:
+ :return:
+ """
+ bias = Variable(torch.ones((x.size()[0], x.size()[1], 1)), requires_grad=False)
+ return torch.cat((bias, x), dim=2)
+ # end _add_constant
+
+# end ICACell
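+
+
+# A standalone sketch (not part of the library) of what _add_constant does:
+# a constant 1-channel is prepended along the feature dimension, the usual
+# bias trick for linear readouts.
+if __name__ == "__main__":
+    cell = ICACell(input_dim=3, output_dim=2)
+    x = Variable(torch.randn(2, 5, 3))   # (batch, time, features)
+    xb = cell._add_constant(x)           # (2, 5, 4); xb[:, :, 0] is all ones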
diff --git a/ESN/EchoTorch-master/echotorch/nn/Identity.py b/ESN/EchoTorch-master/echotorch/nn/Identity.py
new file mode 100644
index 0000000..ff41e76
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/Identity.py
@@ -0,0 +1,43 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/Identity.py
+# Description : An identity layer (returns its input unchanged).
+# Date : 09th of April, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+import torch
+import torch.nn as nn
+from torch.autograd import Variable
+
+
+# Identity layer
+class Identity(nn.Module):
+ """
+ Identity layer
+ """
+
+ # Forward
+ def forward(self, x):
+ """
+ Forward
+        :param x: Input signal.
+        :return: The input, unchanged.
+ """
+ return x
+ # end forward
+
+# end Identity
diff --git a/ESN/EchoTorch-master/echotorch/nn/LiESN.py b/ESN/EchoTorch-master/echotorch/nn/LiESN.py
new file mode 100644
index 0000000..8530d04
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/LiESN.py
@@ -0,0 +1,93 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/LiESN.py
+# Description : A Leaky-Integrated Echo State Network module.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+import torch
+from .LiESNCell import LiESNCell
+from .ESN import ESN
+
+
+# Leaky-Integrated Echo State Network module
+class LiESN(ESN):
+ """
+ Leaky-Integrated Echo State Network module
+ """
+
+ # Constructor
+ def __init__(self, input_dim, hidden_dim, output_dim, spectral_radius=0.9,
+ bias_scaling=0, input_scaling=1.0, w=None, w_in=None, w_bias=None, sparsity=None,
+ input_set=[1.0, -1.0], w_sparsity=None, nonlin_func=torch.tanh, learning_algo='inv', ridge_param=0.0,
+ leaky_rate=1.0, train_leaky_rate=False, feedbacks=False, wfdb_sparsity=None,
+ normalize_feedbacks=False):
+ """
+ Constructor
+ :param input_dim:
+ :param hidden_dim:
+ :param output_dim:
+ :param spectral_radius:
+ :param bias_scaling:
+ :param input_scaling:
+ :param w:
+ :param w_in:
+ :param w_bias:
+ :param sparsity:
+ :param input_set:
+ :param w_sparsity:
+ :param nonlin_func:
+ :param learning_algo:
+ :param ridge_param:
+ :param leaky_rate:
+ :param train_leaky_rate:
+ :param feedbacks:
+ """
+ super(LiESN, self).__init__(input_dim, hidden_dim, output_dim, spectral_radius=spectral_radius,
+ bias_scaling=bias_scaling, input_scaling=input_scaling,
+ w=w, w_in=w_in, w_bias=w_bias, sparsity=sparsity, input_set=input_set,
+ w_sparsity=w_sparsity, nonlin_func=nonlin_func, learning_algo=learning_algo,
+ ridge_param=ridge_param, create_cell=False, feedbacks=feedbacks,
+ wfdb_sparsity=wfdb_sparsity, normalize_feedbacks=normalize_feedbacks)
+
+ # Recurrent layer
+ self.esn_cell = LiESNCell(leaky_rate, train_leaky_rate, input_dim, hidden_dim, spectral_radius=spectral_radius,
+ bias_scaling=bias_scaling, input_scaling=input_scaling,
+ w=w, w_in=w_in, w_bias=w_bias, sparsity=sparsity, input_set=input_set,
+ w_sparsity=w_sparsity, nonlin_func=nonlin_func, feedbacks=feedbacks,
+ feedbacks_dim=output_dim, wfdb_sparsity=wfdb_sparsity,
+ normalize_feedbacks=normalize_feedbacks)
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ ###############################################
+ # PRIVATE
+ ###############################################
+
+# end LiESN
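+
+
+# A minimal usage sketch (not part of the library; the dimensions and the
+# toy task are hypothetical, and ESN.forward(u, y) / ESN.finalize() are
+# assumed to behave like the readout cells in this package).
+if __name__ == "__main__":
+    import math
+    from torch.autograd import Variable
+
+    # Learn to reproduce a sine wave one step late
+    esn = LiESN(input_dim=1, hidden_dim=100, output_dim=1, leaky_rate=0.5)
+    u = Variable(torch.FloatTensor([[[math.sin(0.1 * t)] for t in range(200)]]))
+    y = Variable(torch.FloatTensor([[[math.sin(0.1 * (t - 1))] for t in range(200)]]))
+
+    esn(u, y)       # training mode: accumulate readout statistics
+    esn.finalize()  # solve the ridge regression in closed form
+    y_hat = esn(u)  # evaluation mode: returns predictions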
diff --git a/ESN/EchoTorch-master/echotorch/nn/LiESNCell.py b/ESN/EchoTorch-master/echotorch/nn/LiESNCell.py
new file mode 100644
index 0000000..2644396
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/LiESNCell.py
@@ -0,0 +1,150 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/LiESNCell.py
+# Description : An Leaky-Integrated Echo State Network layer.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+import torch
+import torch.sparse
+import torch.nn as nn
+from torch.autograd import Variable
+from .ESNCell import ESNCell
+
+
+# Leaky-Integrated Echo State Network layer
+class LiESNCell(ESNCell):
+ """
+ Leaky-Integrated Echo State Network layer
+ """
+
+ # Constructor
+ def __init__(self, leaky_rate=1.0, train_leaky_rate=False, *args, **kwargs):
+ """
+ Constructor
+ :param leaky_rate: Reservoir's leaky rate (default 1.0, normal ESN)
+ :param train_leaky_rate: Train leaky rate as parameter? (default: False)
+ """
+ super(LiESNCell, self).__init__(*args, **kwargs)
+
+ # Params
+ if train_leaky_rate:
+ self.leaky_rate = nn.Parameter(torch.Tensor(1).fill_(leaky_rate), requires_grad=True)
+ else:
+ # Initialize bias
+ self.register_buffer('leaky_rate', Variable(torch.Tensor(1).fill_(leaky_rate), requires_grad=False))
+ # end if
+ # end __init__
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Forward
+ def forward(self, u, y=None, w_out=None):
+ """
+ Forward
+ :param u: Input signal.
+ :return: Resulting hidden states.
+ """
+ # Time length
+ time_length = int(u.size()[1])
+
+ # Number of batches
+ n_batches = int(u.size()[0])
+
+ # Outputs
+ outputs = Variable(torch.zeros(n_batches, time_length, self.output_dim))
+ outputs = outputs.cuda() if self.hidden.is_cuda else outputs
+
+ # For each batch
+ for b in range(n_batches):
+ # Reset hidden layer
+ self.reset_hidden()
+
+            # For each step
+ for t in range(time_length):
+ # Current input
+ ut = u[b, t]
+
+ # Compute input layer
+ u_win = self.w_in.mv(ut)
+
+ # Apply W to x
+ x_w = self.w.mv(self.hidden)
+
+ # Feedback or not
+ if self.feedbacks and self.training and y is not None:
+ # Current target
+ yt = y[b, t]
+
+ # Compute feedback layer
+ y_wfdb = self.w_fdb.mv(yt)
+
+ # Add everything
+ x = u_win + x_w + y_wfdb + self.w_bias
+ # x = u_win + x_w + self.w_bias
+ elif self.feedbacks and not self.training and w_out is not None:
+ # Add bias
+ bias_hidden = torch.cat((Variable(torch.ones(1)), self.hidden), dim=0)
+
+ # Compute past output
+ yt = w_out.t().mv(bias_hidden)
+
+ # Normalize
+ if self.normalize_feedbacks:
+ yt -= torch.min(yt)
+ yt /= torch.max(yt) - torch.min(yt)
+ yt /= torch.sum(yt)
+ # end if
+
+ # Compute feedback layer
+ y_wfdb = self.w_fdb.mv(yt)
+
+ # Add everything
+ x = u_win + x_w + y_wfdb + self.w_bias
+ # x = u_win + x_w + self.w_bias
+ else:
+ # Add everything
+ x = u_win + x_w + self.w_bias
+ # end if
+
+ # Apply activation function
+ x = self.nonlin_func(x)
+
+                # Leaky integration: x(t) = (1 - a) * x(t-1) + a * f(...)
+                self.hidden.data = (self.hidden.mul(1.0 - self.leaky_rate) + x.view(self.output_dim).mul(self.leaky_rate)).data
+
+                # Store the new state
+                outputs[b, t] = self.hidden
+ # end for
+ # end for
+
+ return outputs
+ # end forward
+
+ ###############################################
+ # PRIVATE
+ ###############################################
+
+# end LiESNCell
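+
+
+# The state update implemented above, spelled out on toy tensors (a
+# standalone sketch with hypothetical weights; `a` is the leaky rate):
+#
+#     x(t) = (1 - a) * x(t-1) + a * tanh(W_in u(t) + W x(t-1) + b)
+#
+if __name__ == "__main__":
+    a = 0.5
+    hidden = torch.zeros(3)
+    w = torch.eye(3).mul(0.9)   # toy recurrent weights
+    w_in = torch.ones(3, 1)     # toy input weights
+    for t in range(5):
+        ut = torch.FloatTensor([0.1 * t])
+        pre = torch.tanh(w_in.mv(ut) + w.mv(hidden))
+        hidden = hidden.mul(1.0 - a) + pre.mul(a)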
diff --git a/ESN/EchoTorch-master/echotorch/nn/OnlinePCACell.py b/ESN/EchoTorch-master/echotorch/nn/OnlinePCACell.py
new file mode 100644
index 0000000..b3e1fd3
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/OnlinePCACell.py
@@ -0,0 +1,321 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/OnlinePCACell.py
+# Description : An online (incremental) PCA cell.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch.sparse
+import torch
+import torch.nn as nn
+from torch.autograd import Variable
+
+
+# Online PCA cell
+# We extract the principal components from the input data incrementally.
+class OnlinePCACell(nn.Module):
+ """
+ Online PCA cell
+ We extract the principal components from the input data incrementally.
+ Weng J., Zhang Y. and Hwang W.,
+ Candid covariance-free incremental principal component analysis,
+ IEEE Trans. Pattern Analysis and Machine Intelligence,
+ vol. 25, 1034--1040, 2003.
+ """
+
+ # Constructor
+ def __init__(self, input_dim, output_dim, amn_params=(20, 200, 2000, 3), init_eigen_vectors=None, var_rel=1, numx_rng=None):
+ """
+ Constructor
+ :param input_dim:
+ :param output_dim:
+ :param amn_params:
+ :param init_eigen_vectors:
+ :param var_rel:
+ :param numx_rng:
+ """
+ # Super call
+ super(OnlinePCACell, self).__init__()
+
+ # Properties
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+ self.amn_params = amn_params
+ self._init_v = init_eigen_vectors
+ self.var_rel = var_rel
+ self._train_iteration = 0
+ self._training_type = None
+
+ # (Internal) eigenvectors
+ self._v = None
+ self.v = None
+ self.d = None
+
+ # Total and reduced
+ self._var_tot = 1.0
+ self._reduced_dims = self.output_dim
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ # Initial eigen vectors
+ @property
+ def init_eigen_vectors(self):
+ """
+ Initial eigen vectors
+ :return:
+ """
+ return self._init_v
+ # end init_eigen_vectors
+
+ # Set initial eigen vectors
+ @init_eigen_vectors.setter
+ def init_eigen_vectors(self, init_eigen_vectors=None):
+ """
+ Set initial eigen vectors
+ :param init_eigen_vectors:
+ :return:
+ """
+ self._init_v = init_eigen_vectors
+
+        # Set input dim
+        if self.input_dim is None:
+            self.input_dim = self._init_v.shape[0]
+        else:
+            # Check input dim (the condition and message must stay separate: asserting a tuple is always true)
+            assert self.input_dim == self._init_v.shape[0], \
+                u"Dimension mismatch: init_eigen_vectors shape[0] must be {}, given {}".format(
+                    self.input_dim, self._init_v.shape[0]
+                )
+        # end if
+
+        # Set output dim
+        if self.output_dim is None:
+            self.output_dim = self._init_v.shape[1]
+        else:
+            # Check output dim
+            assert self.output_dim == self._init_v.shape[1], \
+                u"Dimension mismatch: init_eigen_vectors shape[1] must be {}, given {}".format(
+                    self.output_dim, self._init_v.shape[1]
+                )
+        # end if
+
+ # Set V
+ if self.v is None:
+            self._v = self._init_v.clone()
+ self.d = torch.norm(self._v, p=2, dim=0)
+ self.v = self._v / self.d
+ # end if
+ # end init_eigen_vectors
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Get variance explained by PCA
+ def get_var_tot(self):
+ """
+ Get variance explained by PCA
+ :return:
+ """
+ return self._var_tot
+ # end get_var_tot
+
+ # Get reducible dimensionality based on the set thresholds
+ def get_reduced_dimensionality(self):
+ """
+ Return reducible dimensionality based on the set thresholds.
+ :return:
+ """
+ return self._reduced_dims
+ # end get_reduced_dimensionality
+
+ # Get projection matrix
+ def get_projmatrix(self, transposed=1):
+ """
+ Get projection matrix
+ :param transposed:
+ :return:
+ """
+ if transposed:
+ return self.v
+ # end if
+ return self.v.t()
+ # end get_projmatrix
+
+ # Get back-projection matrix (reconstruction matrix)
+ def get_recmatrix(self, transposed=1):
+ """
+ Get reconstruction matrix
+ :param transposed:
+ :return:
+ """
+ if transposed:
+ return self.v.t()
+ # end if
+ return self.v
+ # end get_recmatrix
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Forward
+ def forward(self, x, y=None):
+ """
+ Forward
+ :param x: Input signal.
+ :param y: Target outputs
+ :return: Output or hidden states
+ """
+ # Update components
+ self._update_pca(x)
+
+ # Execute
+ return self._execute(x)
+ # end forward
+
+ ###############################################
+ # PRIVATE
+ ###############################################
+
+ # Project the input on the first 'n' components
+ def _execute(self, x, n=None):
+ """
+ Project the input on the first 'n' components
+ :param x:
+ :param n:
+ :return:
+ """
+ if n is not None:
+ return x.mm(self.v[:, :n])
+ # end if
+ return x.mm(self.v)
+ # end _execute
+
+ # Update the principal components.
+ def _update_pca(self, x):
+ """
+ Update the principal components
+ :param x:
+ :return:
+ """
+        # Params
+        self._train_iteration += 1
+        [w1, w2] = self._amnesic(self._train_iteration)
+ red_j = self.output_dim
+ red_j_flag = False
+ explained_var = 0.0
+
+ # For each output
+ r = x
+ for j in range(self.output_dim):
+ v = self._v[:, j:j + 1]
+ d = self.d[j]
+
+            # v is (input_dim, 1) and r is (n_samples, input_dim), so matrix
+            # products (mm) are needed here rather than mv
+            v = w1 * v + w2 * r.t().mm(r.mm(v)) / d
+            d = torch.norm(v)
+            vn = v / d
+            r = r - r.mm(vn).mm(vn.t())
+ explained_var += d
+
+ # Red flag
+ if not red_j_flag:
+ ratio = explained_var / self._var_tot
+ if ratio > self.var_rel:
+ red_j = j
+ red_j_flag = True
+ # end if
+ # end if
+
+ self._v[:, j:j + 1] = v
+ self.v[:, j:j + 1] = vn
+ self.d[j] = d
+ # end for
+
+ self._var_tot = explained_var
+ self._reduced_dims = red_j
+ # end update_pca
+
+ # Initialize parameters
+ def _check_params(self, *args):
+ """
+ Initialize parameters
+ :param args:
+ :return:
+ """
+ if self._init_v is None:
+ if self.output_dim is not None:
+ self.init_eigen_vectors = 0.1 * torch.randn(self.input_dim, self.output_dim)
+ else:
+ self.init_eigen_vectors = 0.1 * torch.randn(self.input_dim, self.input_dim)
+ # end if
+ # end if
+ # end _check_params
+
+ # Return amnesic weights
+ def _amnesic(self, n):
+ """
+ Return amnesic weights
+ :param n:
+ :return:
+ """
+ _i = float(n + 1)
+ n1, n2, m, c = self.amn_params
+ if _i < n1:
+ l = 0
+ elif (_i >= n1) and (_i < n2):
+ l = c * (_i - n1) / (n2 - n1)
+ else:
+ l = c + (_i - n2) / m
+ # end if
+ _world = float(_i - 1 - l) / _i
+ _wnew = float(1 + l) / _i
+ return [_world, _wnew]
+ # end _amnesic
+
+ # Add constant
+ def _add_constant(self, x):
+ """
+ Add constant
+ :param x:
+ :return:
+ """
+ bias = Variable(torch.ones((x.size()[0], x.size()[1], 1)), requires_grad=False)
+ return torch.cat((bias, x), dim=2)
+ # end _add_constant
+
+# end OnlinePCACell
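+
+
+# A small illustration of the amnesic weighting used in _update_pca (a
+# standalone sketch): the two weights always sum to one, but once the
+# amnesic term kicks in the new sample gets more than the plain-average
+# weight 1/n, so older estimates decay faster.
+if __name__ == "__main__":
+    cell = OnlinePCACell(input_dim=5, output_dim=2)
+    for n in (1, 10, 100, 1000, 10000):
+        w_old, w_new = cell._amnesic(n)
+        print(n, w_old, w_new)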
diff --git a/ESN/EchoTorch-master/echotorch/nn/PCACell.py b/ESN/EchoTorch-master/echotorch/nn/PCACell.py
new file mode 100644
index 0000000..a25d5f9
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/PCACell.py
@@ -0,0 +1,373 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/PCACell.py
+# Description : A Principal Component Analysis (PCA) cell.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch.sparse
+import torch
+import torch.nn as nn
+from torch.autograd import Variable
+
+
+# Filter the input data through the most significant principal components.
+class PCACell(nn.Module):
+    """
+    Filter the input data through the most significant principal components
+    """
+
+ # Constructor
+ def __init__(self, input_dim, output_dim, svd=False, reduce=False, var_rel=1E-12, var_abs=1E-15, var_part=None):
+ """
+ Constructor
+ :param input_dim:
+ :param output_dim:
+        :param svd: If True, use Singular Value Decomposition instead of the standard eigenvalue problem solver. Use it when PCACell complains about singular covariance matrices.
+        :param reduce: Keep only those principal components which have a variance larger than 'var_abs'
+        :param var_rel: Variance relative to first principal component threshold. Default is 1E-12.
+ :param var_abs: Absolute variance threshold. Default is 1E-15.
+ :param var_part: Variance relative to total variance threshold. Default is None.
+ """
+ # Super
+ super(PCACell, self).__init__()
+
+ # Properties
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+ self.svd = svd
+ self.var_abs = var_abs
+ self.var_rel = var_rel
+ self.var_part = var_part
+ self.reduce = reduce
+
+ # Set it as buffer
+ self.register_buffer('xTx', Variable(torch.zeros(input_dim, input_dim), requires_grad=False))
+ self.register_buffer('xTx_avg', Variable(torch.zeros(input_dim), requires_grad=False))
+
+ # Eigen values
+ self.d = None
+
+ # Eigen vectors, first index for coordinates
+ self.v = None
+
+ # Total variance
+ self.total_variance = None
+
+ # Len, average and explained variance
+ self.tlen = 0
+ self.avg = None
+ self.explained_variance = None
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ # Initialize the covariance matrix one for
+ # the input data.
+ self._init_internals()
+
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Forward
+ def forward(self, x, y=None):
+ """
+ Forward
+ :param x: Input signal.
+ :param y: Target outputs
+ :return: Output or hidden states
+ """
+ # Number of batches
+ n_batches = int(x.size()[0])
+
+ # Time length
+ time_length = x.size()[1]
+
+ # Outputs
+ outputs = Variable(torch.zeros(n_batches, time_length, self.output_dim))
+ outputs = outputs.cuda() if x.is_cuda else outputs
+
+ # For each batch
+ for b in range(n_batches):
+ # Sample
+ s = x[b]
+
+ # Train or execute
+ if self.training:
+ self._update_cov_matrix(s)
+ else:
+ outputs[b] = self._execute_pca(s)
+ # end if
+ # end for
+
+ return outputs
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training with LU factorization or Pseudo-inverse
+ """
+ # Reshape average
+ xTx, avg, tlen = self._fix(self.xTx, self.xTx_avg, self.tlen)
+
+ # Reshape
+ self.avg = avg.unsqueeze(0)
+
+ # We need more observations than variables
+ if self.tlen < self.input_dim:
+ raise Exception(u"The number of observations ({}) is larger than the number of input variables ({})".format(self.tlen, self.input_dim))
+ # end if
+
+ # Total variance
+ total_var = torch.diag(xTx).sum()
+
+ # Compute and sort eigenvalues
+ d, v = torch.symeig(xTx, eigenvectors=True)
+
+ # Check for negative eigenvalues
+ if float(d.min()) < 0:
+ raise Exception(u"Got negative eigenvalues ({}). You may either set output_dim to be smaller".format(d))
+ # end if
+
+ # Indexes
+        indexes = list(range(d.size(0) - 1, -1, -1))
+
+ # Sort by descending order
+ d = torch.take(d, Variable(torch.LongTensor(indexes)))
+ v = v[:, indexes]
+
+ # Explained covariance
+ self.explained_variance = torch.sum(d) / total_var
+
+ # Store eigenvalues
+ self.d = d[:self.output_dim]
+
+ # Store eigenvectors
+ self.v = v[:, :self.output_dim]
+
+ # Total variance
+ self.total_variance = total_var
+
+ # Stop training
+ self.train(False)
+ # end finalize
+
+ # Get explained variance
+ def get_explained_variance(self):
+ """
+ The explained variance is the fraction of the original variance that can be explained by the
+ principal components.
+ :return:
+ """
+ return self.explained_variance
+ # end get_explained_variance
+
+ # Get the projection matrix
+    def get_proj_matrix(self, transposed=True):
+        """
+        Get the projection matrix
+        :param transposed:
+        :return:
+        """
+        # Stop training
+        self.train(False)
+
+        # Transposed
+        if transposed:
+ return self.v
+ # end if
+ return self.v.t()
+ # end get_proj_matrix
+
+ # Get the reconstruction matrix
+    def get_rec_matrix(self, transposed=1):
+        """
+        Returns the reconstruction matrix
+        :param transposed:
+        :return:
+        """
+        # Stop training
+        self.train(False)
+
+        # Transposed
+        if transposed:
+ return self.v.t()
+ # end if
+ return self.v
+ # end get_rec_matrix
+
+ ###############################################
+ # PRIVATE
+ ###############################################
+
+ # Project the input on the first 'n' principal components
+ def _execute_pca(self, x, n=None):
+ """
+ Project the input on the first 'n' principal components
+ :param x:
+ :param n:
+ :return:
+ """
+ if n is not None:
+ return (x - self.avg).mm(self.v[:, :n])
+ # end if
+ return (x - self.avg).mm(self.v)
+ # end _execute
+
+ # Project data from the output to the input space using the first 'n' components.
+ def _inverse(self, y, n=None):
+ """
+ Project data from the output to the input space using the first 'n' components.
+ :param y:
+ :param n:
+ :return:
+ """
+ if n is None:
+ n = y.shape[1]
+ # end if
+
+ if n > self.output_dim:
+ raise Exception(u"y has dimension {} but should but at most {}".format(n, self.output_dim))
+ # end if
+
+ # Get reconstruction matrix
+ v = self.get_rec_matrix()
+
+        # Reconstruct with the first 'n' components (n defaults to y.shape[1] above, so it is never None here)
+        return y.mm(v[:n, :]) + self.avg
+ # end _inverse
+
+ # Adjust output dim
+ def _adjust_output_dim(self):
+ """
+ If the output dimensions is small than the input dimension
+ :return:
+ """
+ # If the number of PC is not specified, keep all
+ if self.desired_variance is None and self.ouput_dim is None:
+ self.output_dim = self.input_dim
+ return None
+ # end if
+
+ # Define the range of eigenvalues to compute if the number of PC to keep
+ # has been specified directly.
+ if self.output_dim is not None and self.output_dim >= 1:
+ return (self.input_dim - self.output_dim + 1, self.input_dim)
+ else:
+ return None
+ # end if
+ # end _adjust_output_dim
+
+ # Fix covariance matrix
+ def _fix(self, mtx, avg, tlen, center=True):
+ """
+ Returns a triple containing the covariance matrix, the average and
+ the number of observations.
+ :param mtx:
+ :param center:
+ :return:
+ """
+ mtx /= tlen - 1
+
+        # Subtract the mean
+ if center:
+ avg_mtx = torch.ger(avg, avg)
+ avg_mtx /= tlen * (tlen - 1)
+ mtx -= avg_mtx
+ # end if
+
+ # Fix the average
+ avg /= tlen
+
+ return mtx, avg, tlen
+ # end fix
+
+ # Update covariance matrix
+ def _update_cov_matrix(self, x):
+ """
+ Update covariance matrix
+ :param x:
+ :return:
+ """
+ # Init
+ if self.xTx is None:
+ self._init_internals()
+ # end if
+
+ # Update
+ self.xTx.data.add_(x.t().mm(x).data)
+ self.xTx_avg.add_(torch.sum(x, dim=0))
+ self.tlen += x.size(0)
+ # end _update_cov_matrix
+
+ # Initialize covariance
+ def _init_cov_matrix(self):
+ """
+ Initialize covariance matrix
+ :return:
+ """
+ self.xTx.data = torch.zeros(self.input_dim, self.input_dim)
+ self.xTx_avg.data = torch.zeros(self.input_dim)
+ # end _init_cov_matrix
+
+ # Initialize internals
+ def _init_internals(self):
+ """
+ Initialize internals
+ :param x:
+ :return:
+ """
+ # Init covariance matrix
+ self._init_cov_matrix()
+ # end _init_internals
+
+ # Add constant
+ def _add_constant(self, x):
+ """
+ Add constant
+ :param x:
+ :return:
+ """
+ bias = Variable(torch.ones((x.size()[0], x.size()[1], 1)), requires_grad=False)
+ return torch.cat((bias, x), dim=2)
+ # end _add_constant
+
+# end PCACell
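+
+
+# A minimal usage sketch (hypothetical data): accumulate the covariance in
+# training mode, call finalize() to solve the eigenproblem, then project.
+if __name__ == "__main__":
+    pca = PCACell(input_dim=10, output_dim=3)
+    x = Variable(torch.randn(4, 500, 10))   # 4 batches, 500 steps, 10 dims
+    pca(x)             # training mode: updates xTx and xTx_avg
+    pca.finalize()     # eigen-decomposition of the covariance matrix
+    reduced = pca(x)   # evaluation mode: projects on the 3 main components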
diff --git a/ESN/EchoTorch-master/echotorch/nn/RRCell.py b/ESN/EchoTorch-master/echotorch/nn/RRCell.py
new file mode 100644
index 0000000..e43f92f
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/RRCell.py
@@ -0,0 +1,180 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/RRCell.py
+# Description : A Ridge Regression cell (closed-form linear readout).
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch.sparse
+import torch
+import torch.nn as nn
+from torch.autograd import Variable
+
+
+# Ridge Regression cell
+class RRCell(nn.Module):
+ """
+ Ridge Regression cell
+ """
+
+ # Constructor
+ def __init__(self, input_dim, output_dim, ridge_param=0.0, feedbacks=False, with_bias=True, learning_algo='inv'):
+ """
+ Constructor
+        :param input_dim: Input dimension (e.g. the reservoir size).
+        :param output_dim: Output dimension.
+ """
+ super(RRCell, self).__init__()
+
+ # Properties
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+ self.ridge_param = ridge_param
+ self.feedbacks = feedbacks
+ self.with_bias = with_bias
+ self.learning_algo = learning_algo
+
+ # Size
+ if self.with_bias:
+ self.x_size = input_dim + 1
+ else:
+ self.x_size = input_dim
+ # end if
+
+ # Set it as buffer
+ self.register_buffer('xTx', Variable(torch.zeros(self.x_size, self.x_size), requires_grad=False))
+ self.register_buffer('xTy', Variable(torch.zeros(self.x_size, output_dim), requires_grad=False))
+        self.register_buffer('w_out', Variable(torch.zeros(self.x_size, output_dim), requires_grad=False))
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ """self.xTx.data = torch.zeros(self.x_size, self.x_size)
+ self.xTy.data = torch.zeros(self.x_size, self.output_dim)
+ self.w_out.data = torch.zeros(1, self.input_dim)"""
+ self.xTx.data.fill_(0.0)
+ self.xTy.data.fill_(0.0)
+ self.w_out.data.fill_(0.0)
+
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Output matrix
+ def get_w_out(self):
+ """
+ Output matrix
+ :return:
+ """
+ return self.w_out
+ # end get_w_out
+
+ # Forward
+ def forward(self, x, y=None):
+ """
+ Forward
+ :param x: Input signal.
+ :param y: Target outputs
+ :return: Output or hidden states
+ """
+ # Batch size
+ batch_size = x.size()[0]
+
+ # Time length
+ time_length = x.size()[1]
+
+ # Add bias
+ if self.with_bias:
+ x = self._add_constant(x)
+ # end if
+
+ # Learning algo
+ if self.training:
+ for b in range(batch_size):
+ self.xTx.data.add_(x[b].t().mm(x[b]).data)
+ self.xTy.data.add_(x[b].t().mm(y[b]).data)
+ # end for
+ return x
+ elif not self.training:
+ # Outputs
+ outputs = Variable(torch.zeros(batch_size, time_length, self.output_dim), requires_grad=False)
+ outputs = outputs.cuda() if self.w_out.is_cuda else outputs
+
+ # For each batch
+ for b in range(batch_size):
+ outputs[b] = torch.mm(x[b], self.w_out)
+ # end for
+
+ return outputs
+ # end if
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training with LU factorization or Pseudo-inverse
+ """
+        if self.learning_algo == 'inv':
+            # Ridge-regularized closed form: W_out = (X^T X + lambda * I)^-1 X^T Y
+            inv_xTx = (self.xTx + self.ridge_param * torch.eye(self.x_size)).inverse()
+            self.w_out.data = torch.mm(inv_xTx, self.xTy).data
+        else:
+            # Solve the linear system (X^T X + lambda * I) W_out = X^T Y; gesv returns (solution, LU)
+            solution, _ = torch.gesv(self.xTy, self.xTx + torch.eye(self.x_size).mul(self.ridge_param))
+            self.w_out.data = solution.data
+        # end if
+
+ # Not in training mode anymore
+ self.train(False)
+ # end finalize
+
+ ###############################################
+ # PRIVATE
+ ###############################################
+
+ # Add constant
+ def _add_constant(self, x):
+ """
+ Add constant
+ :param x:
+ :return:
+ """
+ if x.is_cuda:
+ bias = Variable(torch.ones((x.size()[0], x.size()[1], 1)).cuda(), requires_grad=False)
+ else:
+ bias = Variable(torch.ones((x.size()[0], x.size()[1], 1)), requires_grad=False)
+ # end if
+ return torch.cat((bias, x), dim=2)
+ # end _add_constant
+
+# end RRCell
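+
+
+# A minimal usage sketch (hypothetical shapes): in training mode forward()
+# only accumulates X^T X and X^T Y; finalize() solves for W_out, after which
+# forward() applies the learned readout.
+if __name__ == "__main__":
+    rr = RRCell(input_dim=100, output_dim=2, ridge_param=0.01)
+    states = Variable(torch.randn(1, 50, 100))   # e.g. reservoir states
+    targets = Variable(torch.randn(1, 50, 2))
+    rr(states, targets)   # accumulate statistics
+    rr.finalize()         # closed-form ridge regression
+    predictions = rr(states)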
diff --git a/ESN/EchoTorch-master/echotorch/nn/SFACell.py b/ESN/EchoTorch-master/echotorch/nn/SFACell.py
new file mode 100644
index 0000000..c018b55
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/SFACell.py
@@ -0,0 +1,342 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/SFACell.py
+# Description : A Slow Feature Analysis (SFA) cell.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch.sparse
+import torch
+import torch.nn as nn
+import numpy as np
+from past.utils import old_div
+
+
+# Slow Feature Analysis layer
+class SFACell(nn.Module):
+ """
+ Extract the slowly varying components from input data.
+ """
+
+ # Type keys
+ _type_keys = ['f', 'd', 'F', 'D']
+
+ # Type conv
+ _type_conv = {('f', 'd'): 'd', ('f', 'F'): 'F', ('f', 'D'): 'D',
+ ('d', 'F'): 'D', ('d', 'D'): 'D',
+ ('F', 'd'): 'D', ('F', 'D'): 'D'}
+
+ # Constructor
+ def __init__(self, input_dim, output_dim, include_last_sample=True, rank_deficit_method='none', use_bias=True):
+ """
+ Constructor
+ :param input_dim: Input dimension
+        :param output_dim: Number of slow features
+ :param include_last_sample: If set to False, the training method discards the last sample in every chunk during training when calculating the matrix.
+ :param rank_deficit_method: 'none', 'reg', 'pca', 'svd', 'auto'.
+ """
+ super(SFACell, self).__init__()
+ self.include_last_sample = include_last_sample
+ self.use_bias = use_bias
+ self.input_dim = input_dim
+ self.output_dim = output_dim
+
+        # Initialize the two covariance matrices: one for
+        # the input data, the other for its time derivatives.
+ self.xTx = torch.zeros(input_dim, input_dim)
+ self.xTx_avg = torch.zeros(input_dim)
+ self.dxTdx = torch.zeros(input_dim, input_dim)
+ self.dxTdx_avg = torch.zeros(input_dim)
+
+ # Set routine for eigenproblem
+ self.set_rank_deficit_method(rank_deficit_method)
+ self.rank_threshold = 1e-12
+ self.rank_deficit = 0
+
+ # Will be set after training
+ self.d = None
+ self.sf = None
+ self.avg = None
+ self.bias = None
+        self.tlen = 0
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Time derivative
+ def time_derivative(self, x):
+ """
+ Compute the approximation of time derivative
+ :param x:
+ :return:
+ """
+ return x[1:, :] - x[:-1, :]
+ # end time_derivative
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Forward
+ def forward(self, x):
+ """
+ Forward
+ :param x: Input signal.
+ :return: Output or hidden states
+ """
+ # For each batch
+ for b in np.arange(0, x.size(0)):
+ # If training or execution
+ if self.training:
+ # Last sample
+ last_sample_index = None if self.include_last_sample else -1
+
+ # Sample and derivative
+ xs = x[b, :last_sample_index, :]
+ xd = self.time_derivative(x[b])
+
+                # Update covariance matrices in place (plain .add() would discard its result)
+                self.xTx.add_(xs.t().mm(xs))
+                self.dxTdx.add_(xd.t().mm(xd))
+
+                # Update averages (sum over time: one value per input dimension)
+                self.xTx_avg += torch.sum(xs, dim=0)
+                self.dxTdx_avg += torch.sum(xd, dim=0)
+
+                # Number of observations seen so far
+                self.tlen += xs.size(0)
+            else:
+                # Project on the slow features (in place, so output_dim == input_dim is assumed)
+                x[b] = x[b].mm(self.sf) - self.bias
+ # end if
+ # end if
+ return x
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training with LU factorization or Pseudo-inverse
+ """
+ # Covariance
+        self.xTx, self.xTx_avg, self.tlen = self._fix(self.xTx, self.xTx_avg, self.tlen, center=True)
+ self.dxTdx, self.dxTdx_avg, self.tlen = self._fix(self.dxTdx, self.dxTdx_avg, self.tlen, center=False)
+
+ # Range
+ rng = (1, self.output_dim)
+
+ # Resolve system
+ self.d, self.sf = self._symeig(
+ self.dxTdx, self.xTx, rng
+ )
+ d = self.d
+
+ # We want only positive values
+ if torch.min(d) < 0:
+ raise Exception(u"Got negative values in {}".format(d))
+ # end if
+
+ # Delete covariance matrix
+ del self.xTx
+ del self.dxTdx
+
+        # Store bias (projection of the average, so outputs are centered)
+        self.bias = self.sf.t().mv(self.xTx_avg)
+ # end finalize
+
+ ###############################################
+ # PRIVATE
+ ###############################################
+
+ # Solve standard and generalized eigenvalue problem for symmetric (hermitian) definite positive matrices
+    def _symeig(self, A, B, rng, eigenvectors=True):
+        """
+        Solve the standard and generalized eigenvalue problem for symmetric (hermitian) positive definite matrices.
+        :param A: An N x N matrix
+        :param B: An N x N matrix
+        :param rng: (lo, hi), the indexes of the smallest and largest eigenvalues to be returned.
+        :param eigenvectors: Return eigenvalues and eigenvectors, or only eigenvalues
+        :return: w, the eigenvalues, and Z, the eigenvectors
+        """
+ # To numpy
+ A = A.numpy()
+ B = B.numpy()
+
+        # Common dtype for both matrices
+        dtype = np.dtype(self._greatest_common_dtype([A, B]))
+
+        # Make B the identity matrix
+        wB, ZB = np.linalg.eigh(B)
+
+        # Check eigenvalues
+        self._assert_eigenvalues_real(wB, dtype)
+
+ # No negative values
+ if wB.real.min() < 0:
+ raise Exception(u"Got negative eigenvalues: {}".format(wB))
+ # end if
+
+ # Old division
+ ZB = old_div(ZB.real, np.sqrt(wB.real))
+
+ # A = ZB^T * A * ZB
+ A = np.matmul(np.matmul(ZB.T, A), ZB)
+
+ # Diagonalize A
+ w, ZA = np.linalg.eigh(A)
+ Z = np.matmul(ZB, ZA)
+
+ # Check eigenvalues
+ self._assert_eigenvalues_real(w, dtype)
+
+ # Read
+ w = w.real
+ Z = Z.real
+
+ # Sort
+ idx = w.argsort()
+ w = w.take(idx)
+ Z = Z.take(idx, axis=1)
+
+ # Sanitize range
+ n = A.shape[0]
+        lo, hi = rng
+ if lo < 1:
+ lo = 1
+ # end if
+ if lo > n:
+ lo = n
+ # end if
+ if hi > n:
+ hi = n
+ # end if
+ if lo > hi:
+ lo, hi = hi, lo
+ # end if
+
+ # Get values
+ Z = Z[:, lo-1:hi]
+ w = w[lo-1:hi]
+
+ # Cast
+ w = self.refcast(w, dtype)
+ Z = self.refcast(Z, dtype)
+
+ # Eigenvectors
+ if eigenvectors:
+ return torch.FloatTensor(w), torch.FloatTensor(Z)
+ else:
+ return torch.FloatTensor(w)
+ # end if
+ # end _symeig
+
+ # Ref cast
+ def refcast(self, array, dtype):
+ """
+ Cast the array to dtype only if necessary, otherwise return a reference.
+ """
+ dtype = np.dtype(dtype)
+ if array.dtype == dtype:
+ return array
+ return array.astype(dtype)
+ # end refcast
+
+ # Check eigenvalues
+ def _assert_eigenvalues_real(self, w, dtype):
+ """
+ Check eigenvalues
+ :param w:
+ :param dtype:
+ :return:
+ """
+ tol = np.finfo(dtype.type).eps * 100
+ if abs(w.imag).max() > tol:
+ err = "Some eigenvalues have significant imaginary part: %s " % str(w)
+ raise Exception(err)
+ # end if
+ # end _assert_eigenvalues_real
+
+ # Greatest common type
+ def _greatest_common_dtype(self, alist):
+ """
+ Apply conversion rules to find the common conversion type
+ dtype 'd' is default for 'i' or unknown types
+ (known types: 'f','d','F','D').
+ """
+ dtype = 'f'
+ for array in alist:
+ if array is None:
+ continue
+ tc = array.dtype.char
+ if tc not in self._type_keys:
+ tc = 'd'
+ transition = (dtype, tc)
+ if transition in self._type_conv:
+ dtype = self._type_conv[transition]
+ return dtype
+ # end _greatest_common_dtype
+
+ # Fix covariance matrix
+ def _fix(self, mtx, avg, tlen, center=True):
+ """
+ Returns a triple containing the covariance matrix, the average and
+ the number of observations.
+ :param mtx:
+ :param center:
+ :return:
+ """
+ if self.use_bias:
+ mtx /= tlen
+ else:
+ mtx /= tlen - 1
+ # end if
+
+        # Subtract the mean
+        if center:
+            avg_mtx = torch.ger(avg, avg)
+ if self.use_bias:
+ avg_mtx /= tlen * tlen
+ else:
+ avg_mtx /= tlen * (tlen - 1)
+ # end if
+ mtx -= avg_mtx
+ # end if
+
+ # Fix the average
+ avg /= tlen
+
+ return mtx, avg, tlen
+ # end fix
+
+# end SFACell
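+
+
+# The optimization solved in finalize(), written out: find directions w
+# minimizing the slowness  w^T <dx dx^T> w  subject to unit variance
+# w^T <x x^T> w = 1, i.e. the generalized eigenproblem
+#
+#     <dx dx^T> w = lambda <x x^T> w
+#
+# whose smallest eigenvalues correspond to the slowest features.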
diff --git a/ESN/EchoTorch-master/echotorch/nn/StackedESN.py b/ESN/EchoTorch-master/echotorch/nn/StackedESN.py
new file mode 100644
index 0000000..f268fbc
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/StackedESN.py
@@ -0,0 +1,303 @@
+# -*- coding: utf-8 -*-
+#
+# File : echotorch/nn/StackedESN.py
+# Description : A Stacked Echo State Network module.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti, University of Neuchâtel
+
+"""
+Created on 26 January 2018
+@author: Nils Schaetti
+"""
+
+# Imports
+import torch.sparse
+import torch
+import torch.nn as nn
+import echotorch.utils
+from torch.autograd import Variable
+from . import LiESNCell
+from .RRCell import RRCell
+from .ESNCell import ESNCell
+import numpy as np
+
+
+# Stacked Echo State Network module
+class StackedESN(nn.Module):
+ """
+ Stacked Echo State Network module
+ """
+
+ # Constructor
+ def __init__(self, input_dim, hidden_dim, output_dim, leaky_rate=1.0, spectral_radius=0.9, bias_scaling=0,
+ input_scaling=1.0, w=None, w_in=None, w_bias=None, sparsity=None, input_set=(1.0, -1.0),
+ w_sparsity=None, nonlin_func=torch.tanh, learning_algo='inv', ridge_param=0.0, with_bias=True):
+ """
+ Constructor
+
+ Arguments:
+        :param input_dim: Input dimension.
+        :param hidden_dim: List of reservoir sizes, one per layer
+        :param output_dim: Output dimension
+ :param spectral_radius: Reservoir's spectral radius
+ :param bias_scaling: Scaling of the bias, a constant input to each neuron (default: 0, no bias)
+ :param input_scaling: Scaling of the input weight matrix, default 1.
+        :param w: Internal weights matrix
+ :param w_in: Input-reservoir weights matrix
+ :param w_bias: Bias weights matrix
+ :param w_fdb: Feedback weights matrix
+ :param sparsity:
+ :param input_set:
+ :param w_sparsity:
+ :param nonlin_func: Reservoir's activation function (tanh, sig, relu)
+ :param learning_algo: Which learning algorithm to use (inv, LU, grad)
+ """
+ super(StackedESN, self).__init__()
+
+ # Properties
+ self.n_layers = len(hidden_dim)
+        self.esn_layers = nn.ModuleList()  # registers each layer as a submodule
+
+ # Number of features
+ self.n_features = 0
+
+ # Recurrent layer
+ for n in range(self.n_layers):
+ # Input dim
+ layer_input_dim = input_dim if n == 0 else hidden_dim[n-1]
+
+ # Final state size
+ self.n_features += hidden_dim[n]
+
+ # Parameters
+ layer_leaky_rate = leaky_rate[n] if type(leaky_rate) is list or type(leaky_rate) is np.ndarray else leaky_rate
+ layer_spectral_radius = spectral_radius[n] if type(spectral_radius) is list or type(spectral_radius) is np.ndarray else spectral_radius
+ layer_bias_scaling = bias_scaling[n] if type(bias_scaling) is list or type(bias_scaling) is np.ndarray else bias_scaling
+ layer_input_scaling = input_scaling[n] if type(input_scaling) is list or type(input_scaling) is np.ndarray else input_scaling
+
+ # W
+ if type(w) is torch.Tensor and w.dim() == 3:
+ layer_w = w[n]
+ elif type(w) is torch.Tensor:
+ layer_w = w
+ else:
+ layer_w = None
+ # end if
+
+ # W in
+ if type(w_in) is torch.Tensor and w_in.dim() == 3:
+ layer_w_in = w_in[n]
+ elif type(w_in) is torch.Tensor:
+ layer_w_in = w_in
+ else:
+ layer_w_in = None
+ # end if
+
+ # W bias
+ if type(w_bias) is torch.Tensor and w_bias.dim() == 2:
+ layer_w_bias = w_bias[n]
+ elif type(w_bias) is torch.Tensor:
+ layer_w_bias = w_bias
+ else:
+ layer_w_bias = None
+ # end if
+
+ # Parameters
+ layer_sparsity = sparsity[n] if type(sparsity) is list or type(sparsity) is np.ndarray else sparsity
+ layer_input_set = input_set[n] if type(input_set) is list or type(input_set) is np.ndarray else input_set
+ layer_w_sparsity = w_sparsity[n] if type(w_sparsity) is list or type(w_sparsity) is np.ndarray else w_sparsity
+ layer_nonlin_func = nonlin_func[n] if type(nonlin_func) is list or type(nonlin_func) is np.ndarray else nonlin_func
+
+ # Create LiESN cell
+ self.esn_layers.append(LiESNCell(
+ layer_leaky_rate, False, layer_input_dim, hidden_dim[n], layer_spectral_radius, layer_bias_scaling,
+ layer_input_scaling, layer_w, layer_w_in, layer_w_bias, None, layer_sparsity, layer_input_set,
+ layer_w_sparsity, layer_nonlin_func
+ ))
+ # end for
+
+ # Output layer
+ self.output = RRCell(self.n_features, output_dim, ridge_param, False, with_bias, learning_algo)
+ # end __init__
+
+ ###############################################
+ # PROPERTIES
+ ###############################################
+
+ # Hidden layer
+ @property
+ def hidden(self):
+ """
+ Hidden layer
+ :return:
+ """
+ # Hidden states
+ hidden_states = list()
+
+ # For each ESN
+ for esn_cell in self.esn_layers:
+ hidden_states.append(esn_cell.hidden)
+ # end for
+
+ return hidden_states
+ # end hidden
+
+ # Hidden weight matrix
+ @property
+ def w(self):
+ """
+ Hidden weight matrix
+ :return:
+ """
+ # W
+ w_mtx = list()
+
+ # For each ESN
+ for esn_cell in self.esn_layers:
+ w_mtx.append(esn_cell.w)
+ # end for
+
+ return w_mtx
+ # end w
+
+ # Input matrix
+ @property
+ def w_in(self):
+ """
+ Input matrix
+ :return:
+ """
+ # W in
+ win_mtx = list()
+
+ # For each ESN
+ for esn_cell in self.esn_layers:
+ win_mtx.append(esn_cell.w_in)
+ # end for
+
+ return win_mtx
+ # end w_in
+
+ ###############################################
+ # PUBLIC
+ ###############################################
+
+ # Reset learning
+ def reset(self):
+ """
+ Reset learning
+ :return:
+ """
+ self.output.reset()
+
+ # Training mode again
+ self.train(True)
+ # end reset
+
+ # Output matrix
+ def get_w_out(self):
+ """
+ Output matrix
+ :return:
+ """
+ return self.output.w_out
+ # end get_w_out
+
+ # Forward
+ def forward(self, u, y=None):
+ """
+ Forward
+ :param u: Input signal.
+ :param y: Target outputs
+ :return: Output or hidden states
+ """
+ # Hidden states
+        hidden_states = Variable(torch.zeros(u.size(0), u.size(1), self.n_features))
+        hidden_states = hidden_states.cuda() if u.is_cuda else hidden_states
+
+ # Compute hidden states
+ pos = 0
+ for index, esn_cell in enumerate(self.esn_layers):
+ layer_dim = esn_cell.output_dim
+ if index == 0:
+ last_hidden_states = esn_cell(u)
+ else:
+ last_hidden_states = esn_cell(last_hidden_states)
+ # end if
+
+ # Update
+ hidden_states[:, :, pos:pos + layer_dim] = last_hidden_states
+
+ # Next position
+ pos += layer_dim
+ # end for
+
+ # Learning algo
+ return self.output(hidden_states, y)
+ # end forward
+
+ # Finish training
+ def finalize(self):
+ """
+ Finalize training with LU factorization
+ """
+ # Finalize output training
+ self.output.finalize()
+
+ # Not in training mode anymore
+ self.train(False)
+ # end finalize
+
+ # Reset hidden layer
+ def reset_hidden(self):
+ """
+ Reset hidden layer
+ :return:
+ """
+        for esn_cell in self.esn_layers:
+            esn_cell.reset_hidden()
+        # end for
+ # end reset_hidden
+
+ # Get W's spectral radius
+ def get_spectral_radius(self):
+ """
+        Get the spectral radius of each layer's W
+        :return: List of spectral radii, one per layer
+        """
+        return [esn_cell.get_spectral_raduis() for esn_cell in self.esn_layers]
+ # end spectral_radius
+
+ ############################################
+ # STATIC
+ ############################################
+
+ # Generate W matrices for a stacked ESN
+ @staticmethod
+ def generate_ws(n_layers, reservoir_size, w_sparsity):
+ """
+ Generate W matrices for a stacked ESN
+ :param n_layers:
+ :param reservoir_size:
+ :param w_sparsity:
+ :return:
+ """
+ ws = torch.FloatTensor(n_layers, reservoir_size, reservoir_size)
+ for i in range(n_layers):
+ ws[i] = ESNCell.generate_w(reservoir_size, w_sparsity)
+ # end for
+ return ws
+    # end generate_ws
+
+# end StackedESN
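+
+
+# A minimal usage sketch (hypothetical dimensions): hidden_dim is a list
+# with one reservoir size per layer; scalar hyper-parameters are broadcast
+# to every layer, while lists give per-layer values.
+if __name__ == "__main__":
+    stacked = StackedESN(input_dim=1, hidden_dim=[100, 100, 50], output_dim=1,
+                         leaky_rate=[1.0, 0.7, 0.5], spectral_radius=0.9)
+    u = Variable(torch.randn(1, 200, 1))
+    y = Variable(torch.randn(1, 200, 1))
+    stacked(u, y)        # accumulate readout statistics
+    stacked.finalize()   # solve the ridge regression
+    y_hat = stacked(u)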
diff --git a/ESN/EchoTorch-master/echotorch/nn/__init__.py b/ESN/EchoTorch-master/echotorch/nn/__init__.py
new file mode 100644
index 0000000..b5b2f8c
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/nn/__init__.py
@@ -0,0 +1,23 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+from .BDESN import BDESN
+from .BDESNPCA import BDESNPCA
+from .BDESNCell import BDESNCell
+from .ESNCell import ESNCell
+from .ESN import ESN
+from .LiESNCell import LiESNCell
+from .LiESN import LiESN
+from .GatedESN import GatedESN
+from .ICACell import ICACell
+from .Identity import Identity
+from .PCACell import PCACell
+from .RRCell import RRCell
+from .SFACell import SFACell
+from .StackedESN import StackedESN
+
+__all__ = [
+ 'BDESN', 'BDESNPCA', 'BDESNCell', 'ESNCell', 'ESN', 'LiESNCell', 'LiESN', 'GatedESN', 'ICACell', 'Identity',
+ 'PCACell', 'RRCell', 'SFACell', 'StackedESN'
+]
diff --git a/ESN/EchoTorch-master/echotorch/transforms/__init__.py b/ESN/EchoTorch-master/echotorch/transforms/__init__.py
new file mode 100644
index 0000000..010c46b
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/__init__.py
@@ -0,0 +1,9 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+from . import text
+
+__all__ = [
+    'text'
+]
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/Character.py b/ESN/EchoTorch-master/echotorch/transforms/text/Character.py
new file mode 100644
index 0000000..77ac90a
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/Character.py
@@ -0,0 +1,131 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+from .Transformer import Transformer
+
+
+# Transform text to character vectors
+class Character(Transformer):
+ """
+ Transform text to character vectors
+ """
+
+ # Constructor
+ def __init__(self, uppercase=False, gram_to_ix=None, start_ix=0, fixed_length=-1):
+ """
+ Constructor
+ """
+ # Gram to ix
+ if gram_to_ix is not None:
+ self.gram_count = len(gram_to_ix.keys())
+ self.gram_to_ix = gram_to_ix
+ else:
+ self.gram_count = start_ix
+ self.gram_to_ix = dict()
+ # end if
+
+ # Ix to gram
+ self.ix_to_gram = dict()
+ if gram_to_ix is not None:
+ for gram in gram_to_ix.keys():
+ self.ix_to_gram[gram_to_ix[gram]] = gram
+ # end for
+ # end if
+
+ # Properties
+ self.uppercase = uppercase
+ self.fixed_length = fixed_length
+
+ # Super constructor
+ super(Character, self).__init__()
+ # end __init__
+
+ ##############################################
+ # Public
+ ##############################################
+
+ ##############################################
+ # Properties
+ ##############################################
+
+ # Get the number of inputs
+ @property
+ def input_dim(self):
+ """
+ Get the number of inputs.
+ :return: The input size.
+ """
+ return 1
+ # end input_dim
+
+ # Vocabulary size
+ @property
+ def voc_size(self):
+ """
+ Vocabulary size
+ :return:
+ """
+ return self.gram_count
+ # end voc_size
+
+ ##############################################
+ # Private
+ ##############################################
+
+ # To upper
+ def to_upper(self, gram):
+ """
+ To upper
+ :param gram:
+ :return:
+ """
+ if not self.uppercase:
+ return gram.lower()
+ # end if
+ return gram
+ # end to_upper
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+        Convert a string to an ESN input
+        :param text: Text to convert
+        :return: LongTensor of gram indexes, and its length
+ """
+ # Add to voc
+ for i in range(len(text)):
+ gram = self.to_upper(text[i])
+ if gram not in self.gram_to_ix.keys():
+ self.gram_to_ix[gram] = self.gram_count
+ self.ix_to_gram[self.gram_count] = gram
+ self.gram_count += 1
+ # end if
+ # end for
+
+        # Map each character to its index
+ text_idxs = [self.gram_to_ix[self.to_upper(text[i])] for i in range(len(text))]
+
+ # To long tensor
+ text_idxs = torch.LongTensor(text_idxs)
+
+ # Check length
+ if self.fixed_length != -1:
+ if text_idxs.size(0) > self.fixed_length:
+ text_idxs = text_idxs[:self.fixed_length]
+ elif text_idxs.size(0) < self.fixed_length:
+ zero_idxs = torch.LongTensor(self.fixed_length).fill_(0)
+ zero_idxs[:text_idxs.size(0)] = text_idxs
+ text_idxs = zero_idxs
+ # end if
+ # end if
+
+ return text_idxs, text_idxs.size(0)
+ # end convert
+
+# end Character
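+
+
+# A minimal usage sketch: the transformer builds its vocabulary on the fly
+# and returns (indexes, length). Note that the pad value 0 collides with
+# the index of the first gram seen when start_ix is 0.
+if __name__ == "__main__":
+    char = Character(fixed_length=10)
+    idxs, length = char(u"hello")
+    # idxs: LongTensor [0, 1, 2, 2, 3, 0, 0, 0, 0, 0] (zero-padded to 10);
+    # char.voc_size is now 4 ('h', 'e', 'l', 'o')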
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/Character2Gram.py b/ESN/EchoTorch-master/echotorch/transforms/text/Character2Gram.py
new file mode 100644
index 0000000..350d13b
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/Character2Gram.py
@@ -0,0 +1,140 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+from .Transformer import Transformer
+import numpy as np
+
+
+# Transform text to character 2-gram
+class Character2Gram(Transformer):
+ """
+ Transform text to character 2-grams
+ """
+
+ # Constructor
+ def __init__(self, uppercase=False, gram_to_ix=None, start_ix=0, fixed_length=-1, overlapse=True):
+ """
+ Constructor
+ """
+ # Gram to ix
+ if gram_to_ix is not None:
+ self.gram_count = len(gram_to_ix.keys())
+ self.gram_to_ix = gram_to_ix
+ else:
+ self.gram_count = start_ix
+ self.gram_to_ix = dict()
+ # end if
+
+ # Ix to gram
+ self.ix_to_gram = dict()
+ if gram_to_ix is not None:
+ for gram in gram_to_ix.keys():
+ self.ix_to_gram[gram_to_ix[gram]] = gram
+ # end for
+ # end if
+
+ # Properties
+ self.uppercase = uppercase
+ self.fixed_length = fixed_length
+ self.overlapse = overlapse
+
+ # Super constructor
+ super(Character2Gram, self).__init__()
+ # end __init__
+
+ ##############################################
+ # Public
+ ##############################################
+
+ ##############################################
+ # Properties
+ ##############################################
+
+ # Get the number of inputs
+ @property
+ def input_dim(self):
+ """
+ Get the number of inputs.
+ :return: The input size.
+ """
+ return 1
+ # end input_dim
+
+ # Vocabulary size
+ @property
+ def voc_size(self):
+ """
+ Vocabulary size
+ :return:
+ """
+ return self.gram_count
+ # end voc_size
+
+ ##############################################
+ # Private
+ ##############################################
+
+ # To upper
+ def to_upper(self, gram):
+ """
+ To upper
+ :param gram:
+ :return:
+ """
+ if not self.uppercase:
+ return gram.lower()
+ # end if
+ return gram
+ # end to_upper
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+        Convert a string to an ESN input
+        :param text: Text to convert
+        :return: LongTensor of gram indexes, and its length
+ """
+ # Step
+ if self.overlapse:
+ step = 1
+ else:
+ step = 2
+ # end if
+
+ # Add to voc
+ for i in np.arange(0, len(text) - 1, step):
+ gram = self.to_upper(text[i] + text[i+1])
+ if gram not in self.gram_to_ix.keys():
+ self.gram_to_ix[gram] = self.gram_count
+ self.ix_to_gram[self.gram_count] = gram
+ self.gram_count += 1
+ # end if
+ # end for
+
+ # List of character to 2grams
+ text_idxs = [self.gram_to_ix[self.to_upper(text[i] + text[i+1])] for i in range(len(text)-1)]
+
+ # To long tensor
+ text_idxs = torch.LongTensor(text_idxs)
+
+ # Check length
+ if self.fixed_length != -1:
+ if text_idxs.size(0) > self.fixed_length:
+ text_idxs = text_idxs[:self.fixed_length]
+ elif text_idxs.size(0) < self.fixed_length:
+ zero_idxs = torch.LongTensor(self.fixed_length).fill_(0)
+ zero_idxs[:text_idxs.size(0)] = text_idxs
+ text_idxs = zero_idxs
+ # end if
+ # end if
+
+ return text_idxs, text_idxs.size(0)
+ # end convert
+
+# end Character2Gram
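+
+
+# Gram extraction sketch: with overlapse=True (the default) "abcd" yields
+# the 2-grams "ab", "bc", "cd"; with overlapse=False only "ab" and "cd"
+# enter the vocabulary (the index list itself always advances by one).
+if __name__ == "__main__":
+    t = Character2Gram()
+    idxs, length = t(u"abcd")   # 3 overlapping 2-grams, length 3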
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/Character3Gram.py b/ESN/EchoTorch-master/echotorch/transforms/text/Character3Gram.py
new file mode 100644
index 0000000..b78e66e
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/Character3Gram.py
@@ -0,0 +1,140 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+from .Transformer import Transformer
+import numpy as np
+
+
+# Transform text to character 3-gram
+class Character3Gram(Transformer):
+ """
+ Transform text to character 3-grams
+ """
+
+ # Constructor
+ def __init__(self, uppercase=False, gram_to_ix=None, start_ix=0, fixed_length=-1, overlapse=True):
+ """
+ Constructor
+ """
+ # Gram to ix
+ if gram_to_ix is not None:
+ self.gram_count = len(gram_to_ix.keys())
+ self.gram_to_ix = gram_to_ix
+ else:
+ self.gram_count = start_ix
+ self.gram_to_ix = dict()
+ # end if
+
+ # Ix to gram
+ self.ix_to_gram = dict()
+ if gram_to_ix is not None:
+ for gram in gram_to_ix.keys():
+ self.ix_to_gram[gram_to_ix[gram]] = gram
+ # end for
+ # end if
+
+ # Properties
+ self.uppercase = uppercase
+ self.fixed_length = fixed_length
+ self.overlapse = overlapse
+
+ # Super constructor
+ super(Character3Gram, self).__init__()
+ # end __init__
+
+ ##############################################
+ # Public
+ ##############################################
+
+ ##############################################
+ # Properties
+ ##############################################
+
+ # Get the number of inputs
+ @property
+ def input_dim(self):
+ """
+ Get the number of inputs.
+ :return: The input size.
+ """
+ return 1
+ # end input_dim
+
+ # Vocabulary size
+ @property
+ def voc_size(self):
+ """
+ Vocabulary size
+ :return:
+ """
+ return self.gram_count
+ # end voc_size
+
+ ##############################################
+ # Private
+ ##############################################
+
+ # To upper
+ def to_upper(self, gram):
+ """
+ To upper
+ :param gram:
+ :return:
+ """
+ if not self.uppercase:
+ return gram.lower()
+ # end if
+ return gram
+ # end to_upper
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+ Convert a string to an ESN input
+ :param text: Text to convert
+ :return: LongTensor of 3-gram indices and its length
+ """
+ # Step
+ if self.overlapse:
+ step = 1
+ else:
+ step = 3
+ # end if
+
+ # Add to voc
+ for i in np.arange(0, len(text) - 2, step):
+ gram = self.to_upper(text[i] + text[i+1] + text[i+2])
+ if gram not in self.gram_to_ix.keys():
+ self.gram_to_ix[gram] = self.gram_count
+ self.ix_to_gram[self.gram_count] = gram
+ self.gram_count += 1
+ # end if
+ # end for
+
+ # List of character 3-gram indices (same stepping as the vocabulary pass above)
+ text_idxs = [self.gram_to_ix[self.to_upper(text[i] + text[i+1] + text[i+2])] for i in np.arange(0, len(text) - 2, step)]
+
+ # To long tensor
+ text_idxs = torch.LongTensor(text_idxs)
+
+ # Check length
+ if self.fixed_length != -1:
+ if text_idxs.size(0) > self.fixed_length:
+ text_idxs = text_idxs[:self.fixed_length]
+ elif text_idxs.size(0) < self.fixed_length:
+ zero_idxs = torch.LongTensor(self.fixed_length).fill_(0)
+ zero_idxs[:text_idxs.size(0)] = text_idxs
+ text_idxs = zero_idxs
+ # end if
+ # end if
+
+ return text_idxs, text_idxs.size(0)
+ # end convert
+
+# end Character3Gram
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/Compose.py b/ESN/EchoTorch-master/echotorch/transforms/text/Compose.py
new file mode 100644
index 0000000..ab2e5e2
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/Compose.py
@@ -0,0 +1,68 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+from .Transformer import Transformer
+
+
+# Compose multiple transformations
+class Compose(Transformer):
+ """
+ Compose multiple transformations
+ """
+
+ # Constructor
+ def __init__(self, transforms):
+ """
+ Constructor
+ """
+ # Properties
+ self.transforms = transforms
+
+ # Super constructor
+ super(Compose, self).__init__()
+ # end __init__
+
+ ##############################################
+ # Public
+ ##############################################
+
+ ##############################################
+ # Properties
+ ##############################################
+
+ # Get the number of inputs
+ @property
+ def input_dim(self):
+ """
+ Get the number of inputs.
+ :return: The input size.
+ """
+ return self.transforms[-1].input_dim
+ # end input_dim
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+ Apply each transformation in sequence
+ :param text: Text to convert
+ :return: Output and size of the last transformer
+ """
+ # For each transform
+ for index, transform in enumerate(self.transforms):
+ # Transform
+ if index == 0:
+ outputs, size = transform(text)
+ else:
+ outputs, size = transform(outputs)
+ # end if
+ # end for
+
+ return outputs, size
+ # end convert
+
+# end Compose
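+
+# A minimal usage sketch, assuming the package is importable as echotorch:
+# Compose feeds the output of one transformer into the next, e.g. character
+# 3-grams followed by an index-to-vector Embedding (the random weights below
+# are illustrative only).
+#
+#   import torch
+#   from echotorch.transforms.text import Compose, Character3Gram, Embedding
+#   weights = torch.randn(1000, 50)              # 1000 grams, 50-dim vectors
+#   pipeline = Compose([Character3Gram(), Embedding(weights)])
+#   vectors, length = pipeline(u"some text")     # (length x 50) FloatTensor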
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/Embedding.py b/ESN/EchoTorch-master/echotorch/transforms/text/Embedding.py
new file mode 100644
index 0000000..9b03152
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/Embedding.py
@@ -0,0 +1,104 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+
+
+# Transform text to vectors with embedding
+class Embedding(object):
+ """
+ Transform text to vectors with embedding
+ """
+
+ # Constructor
+ def __init__(self, weights):
+ """
+ Constructor
+ :param weights: Embedding weight matrix
+ """
+ # Properties
+ self.weights = weights
+ self.voc_size = weights.size(0)
+ self.embedding_dim = weights.size(1)
+ # end __init__
+
+ ##############################################
+ # Properties
+ ##############################################
+
+ # Get the number of inputs
+ @property
+ def input_dim(self):
+ """
+ Get the number of inputs
+ :return:
+ """
+ return self.embedding_dim
+ # end input_dim
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, idxs):
+ """
+ Convert a tensor of token indices to embedding vectors
+ :param idxs: LongTensor of token indices
+ :return: Tensor of embedding vectors and its length
+ """
+ # Inputs as tensor
+ inputs = torch.FloatTensor(1, self.embedding_dim)
+
+ # Start
+ start = True
+ count = 0.0
+
+ # OOV
+ zero = 0.0
+ self.oov = 0.0
+
+ # For each input
+ for i in range(idxs.size(0)):
+ # Get token ix
+ ix = idxs[i]
+
+ # Get vector
+ if ix < self.voc_size:
+ embedding_vector = self.weights[ix]
+ else:
+ embedding_vector = torch.zeros(self.embedding_dim)
+ # end if
+
+ # Count all-zero (out-of-vocabulary) vectors
+ if torch.sum(embedding_vector) == 0.0:
+ zero += 1.0
+ # end if
+
+ # Start/continue
+ if not start:
+ inputs = torch.cat((inputs, torch.FloatTensor(embedding_vector).unsqueeze_(0)), dim=0)
+ else:
+ inputs = torch.FloatTensor(embedding_vector).unsqueeze_(0)
+ start = False
+ # end if
+ count += 1
+ # end for
+
+ # OOV
+ self.oov = zero / count * 100.0
+
+ return inputs, inputs.size()[0]
+ # end convert
+
+ ##############################################
+ # Static
+ ##############################################
+
+
+# end Embedding
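+
+# A minimal usage sketch: Embedding maps a LongTensor of token indices to rows
+# of the weight matrix and records the percentage of all-zero vectors in
+# self.oov (the random weights below are illustrative only).
+#
+#   import torch
+#   from echotorch.transforms.text import Embedding
+#   emb = Embedding(torch.randn(100, 20))    # 100 tokens, 20-dim vectors
+#   inputs, length = emb(torch.LongTensor([3, 7, 42]))
+#   print(inputs.size())                     # (3, 20)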
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/FunctionWord.py b/ESN/EchoTorch-master/echotorch/transforms/text/FunctionWord.py
new file mode 100644
index 0000000..73383da
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/FunctionWord.py
@@ -0,0 +1,118 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+import spacy
+from .Transformer import Transformer
+
+
+# Transform text to a function word vectors
+class FunctionWord(Transformer):
+ """
+ Transform text to character vectors
+ """
+
+ # Constructor
+ def __init__(self, model="en_core_web_lg"):
+ """
+ Constructor
+ :param model: Spacy's model to load.
+ """
+ # Super constructor
+ super(FunctionWord, self).__init__()
+
+ # Properties
+ self.model = model
+ self.nlp = spacy.load(model)
+ # end __init__
+
+ ##############################################
+ # Public
+ ##############################################
+
+ # Get tags
+ def get_tags(self):
+ """
+ Get tags.
+ :return: A tag list.
+ """
+ return [u"a", u"about", u"above", u"after", u"after", u"again", u"against", u"ago", u"ahead",
+ u"all",
+ u"almost", u"along", u"already", u"also", u"although", u"always", u"am", u"among", u"an",
+ u"and", u"any", u"are", u"aren't", u"around", u"as", u"at", u"away", u"backward",
+ u"backwards", u"be", u"because", u"before", u"behind", u"below", u"beneath", u"beside",
+ u"between", u"both", u"but", u"by", u"can", u"cannot", u"can't", u"cause", u"'cos",
+ u"could",
+ u"couldn't", u"'d", u"despite", u"did", u"didn't", u"do", u"does", u"doesn't", u"don't",
+ u"down", u"during", u"each", u"either", u"even", u"ever", u"every", u"except", u"for",
+ u"forward", u"from", u"had", u"hadn't", u"has", u"hasn't", u"have", u"haven't", u"he",
+ u"her", u"here", u"hers", u"herself", u"him", u"himself", u"his", u"how", u"however",
+ u"I",
+ u"if", u"in", u"inside", u"inspite", u"instead", u"into", u"is", u"isn't", u"it", u"its",
+ u"itself", u"just", u"'ll", u"least", u"less", u"like", u"'m", u"many", u"may",
+ u"mayn't",
+ u"me", u"might", u"mightn't", u"mine", u"more", u"most", u"much", u"must", u"mustn't",
+ u"my", u"myself", u"near", u"need", u"needn't", u"needs", u"neither", u"never", u"no",
+ u"none", u"nor", u"not", u"now", u"of", u"off", u"often", u"on", u"once", u"only",
+ u"onto",
+ u"or", u"ought", u"oughtn't", u"our", u"ours", u"ourselves", u"out", u"outside", u"over",
+ u"past", u"perhaps", u"quite", u"'re", u"rather", u"'s", u"seldom", u"several", u"shall",
+ u"shan't", u"she", u"should", u"shouldn't", u"since", u"so", u"some", u"sometimes",
+ u"soon",
+ u"than", u"that", u"the", u"their", u"theirs", u"them", u"themselves", u"then", u"there",
+ u"therefore", u"these", u"they", u"this", u"those", u"though", u"through", u"thus",
+ u"till",
+ u"to", u"together", u"too", u"towards", u"under", u"unless", u"until", u"up", u"upon",
+ u"us", u"used", u"usedn't", u"usen't", u"usually", u"'ve", u"very", u"was", u"wasn't",
+ u"we", u"well", u"were", u"weren't", u"what", u"when", u"where", u"whether", u"which",
+ u"while", u"who", u"whom", u"whose", u"why", u"will", u"with", u"without", u"won't",
+ u"would", u"wouldn't", u"yet", u"you", u"your", u"yours", u"yourself", u"yourselves", u"X"]
+ # end get_tags
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+ Convert a string to an ESN input
+ :param text: Text to convert
+ :return: Tensor of one-hot function word symbols
+ """
+ # Inputs as tensor
+ inputs = torch.FloatTensor(1, self.input_dim)
+
+ # Start
+ start = True
+
+ # For each token
+ for token in self.nlp(text):
+ # Replace if not a function word
+ if token.text not in self.symbols:
+ token_fw = u"X"
+ else:
+ token_fw = token.text
+ # end if
+
+ # Get tag
+ fw = self.tag_to_symbol(token_fw)
+
+ # Add
+ if not start:
+ inputs = torch.cat((inputs, fw), dim=0)
+ else:
+ inputs = fw
+ start = False
+ # end if
+ # end for
+
+ return inputs, inputs.size()[0]
+ # end convert
+
+# end FunctionWord
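+
+# A minimal usage sketch, assuming the spaCy model "en_core_web_lg" is
+# installed: every token outside the function word list is mapped to the
+# catch-all symbol "X".
+#
+#   from echotorch.transforms.text import FunctionWord
+#   fw = FunctionWord()
+#   inputs, length = fw(u"the cat sat on the mat")
+#   # inputs is a (length x input_dim) one-hot tensor; "cat", "sat" and
+#   # "mat" all map to the "X" column.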
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/GensimModel.py b/ESN/EchoTorch-master/echotorch/transforms/text/GensimModel.py
new file mode 100644
index 0000000..d2f48ab
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/GensimModel.py
@@ -0,0 +1,111 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import gensim
+from gensim.utils import tokenize
+import torch
+import numpy as np
+
+
+# Transform text to vectors with a Gensim model
+class GensimModel(object):
+ """
+ Transform text to vectors with a Gensim model
+ """
+
+ # Constructor
+ def __init__(self, model_path):
+ """
+ Constructor
+ :param model_path: Model's path.
+ """
+ # Properties
+ self.model_path = model_path
+
+ # Format
+ binary = model_path[-4:] != ".vec"
+
+ # Load
+ self.model = gensim.models.KeyedVectors.load_word2vec_format(model_path, binary=binary, unicode_errors='ignore')
+
+ # OOV
+ self.oov = 0.0
+ # end __init__
+
+ ##############################################
+ # Properties
+ ##############################################
+
+ # Get the number of inputs
+ @property
+ def input_dim(self):
+ """
+ Get the number of inputs.
+ :return: The input size (300-dimensional vectors assumed).
+ """
+ return 300
+ # end input_dim
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+ Convert a string to an ESN input
+ :param text: Text to convert
+ :return: Tensor of word vectors
+ """
+ # Inputs as tensor
+ inputs = torch.FloatTensor(1, self.input_dim)
+
+ # Start
+ start = True
+ count = 0.0
+
+ # OOV
+ zero = 0.0
+ self.oov = 0.0
+
+ # For each token
+ for token in tokenize(text):
+ found = False
+ # Try normal
+ try:
+ word_vector = self.model[token]
+ found = True
+ except KeyError:
+ pass
+ # end try
+
+ # Try lower
+ if not found:
+ try:
+ word_vector = self.model[token.lower()]
+ except KeyError:
+ zero += 1.0
+ word_vector = np.zeros(self.input_dim)
+ # end try
+ # end if
+
+ # Start/continue
+ if not start:
+ inputs = torch.cat((inputs, torch.FloatTensor(word_vector).unsqueeze_(0)), dim=0)
+ else:
+ inputs = torch.FloatTensor(word_vector).unsqueeze_(0)
+ start = False
+ # end if
+ count += 1
+ # end for
+
+ # OOV
+ self.oov = zero / count * 100.0
+
+ return inputs, inputs.size()[0]
+ # end convert
+
+ ##############################################
+ # Static
+ ##############################################
+
+# end GensimModel
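+
+# A minimal usage sketch; the model path below is a placeholder for any
+# word2vec-format file (a ".vec" suffix is loaded as text, anything else
+# as binary).
+#
+#   from echotorch.transforms.text import GensimModel
+#   gm = GensimModel("wiki.en.vec")          # placeholder path
+#   inputs, length = gm(u"hello world")      # (length x 300) FloatTensor
+#   print(gm.oov)                            # percentage of OOV tokens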
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/GloveVector.py b/ESN/EchoTorch-master/echotorch/transforms/text/GloveVector.py
new file mode 100644
index 0000000..c94b6fd
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/GloveVector.py
@@ -0,0 +1,89 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+import spacy
+import numpy as np
+from datetime import datetime
+
+
+# Transform text to word vectors
+class GloveVector(object):
+ """
+ Transform text to word vectors
+ """
+
+ # Constructor
+ def __init__(self, model="en_vectors_web_lg"):
+ """
+ Constructor
+ :param model: Spacy's model to load.
+ """
+ # Properties
+ self.model = model
+ self.nlp = spacy.load(model)
+ self.oov = 0.0
+ # end __init__
+
+ ##############################################
+ # Properties
+ ##############################################
+
+ # Get the number of inputs
+ @property
+ def input_dim(self):
+ """
+ Get the number of inputs.
+ :return: The input size (300-dimensional vectors assumed).
+ """
+ return 300
+ # end input_dim
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+ Convert a string to an ESN input
+ :param text: Text to convert
+ :return: Tensor of word vectors
+ """
+ # Inputs as tensor
+ inputs = torch.FloatTensor(1, self.input_dim)
+
+ # Start
+ start = True
+ count = 0.0
+
+ # Zero count
+ zero = 0.0
+ self.oov = 0.0
+
+ # For each token
+ for token in self.nlp(text):
+ if np.sum(token.vector) == 0:
+ zero += 1.0
+ # end if
+ if not start:
+ inputs = torch.cat((inputs, torch.FloatTensor(token.vector).unsqueeze_(0)), dim=0)
+ else:
+ inputs = torch.FloatTensor(token.vector).unsqueeze_(0)
+ start = False
+ # end if
+ count += 1.0
+ # end for
+
+ # OOV
+ self.oov = zero / count * 100.0
+
+ return inputs, inputs.size()[0]
+ # end convert
+
+ ##############################################
+ # Static
+ ##############################################
+
+# end GloveVector
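+
+# A minimal usage sketch, assuming the spaCy vector model "en_vectors_web_lg"
+# is installed; tokens whose vector sums to zero are counted as OOV.
+#
+#   from echotorch.transforms.text import GloveVector
+#   gv = GloveVector()
+#   inputs, length = gv(u"echo state networks")   # (length x 300) FloatTensor
+#   print(gv.oov)                                 # percentage of zero vectors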
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/PartOfSpeech.py b/ESN/EchoTorch-master/echotorch/transforms/text/PartOfSpeech.py
new file mode 100644
index 0000000..35109c6
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/PartOfSpeech.py
@@ -0,0 +1,76 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+import spacy
+from .Transformer import Transformer
+
+
+# Transform text to part-of-speech vectors
+class PartOfSpeech(Transformer):
+ """
+ Transform text to part-of-speech vectors
+ """
+
+ # Constructor
+ def __init__(self, model="en_core_web_lg"):
+ """
+ Constructor
+ :param model: Spacy's model to load.
+ """
+ # Super constructor
+ super(PartOfSpeech, self).__init__()
+
+ # Properties
+ self.model = model
+ self.nlp = spacy.load(model)
+ # end __init__
+
+ ##############################################
+ # Public
+ ##############################################
+
+ # Get tags
+ def get_tags(self):
+ """
+ Get tags.
+ :return: A list of tags.
+ """
+ return [u"ADJ", u"ADP", u"ADV", u"CCONJ", u"DET", u"INTJ", u"NOUN", u"NUM", u"PART", u"PRON", u"PROPN",
+ u"PUNCT", u"SYM", u"VERB", u"SPACE", u"X"]
+ # end get_tags
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+ Convert a string to an ESN input
+ :param text: Text to convert
+ :return: Tensor of one-hot part-of-speech symbols
+ """
+ # Inputs as tensor
+ inputs = torch.FloatTensor(1, self.input_dim)
+
+ # Start
+ start = True
+
+ # For each token
+ for token in self.nlp(text):
+ pos = self.tag_to_symbol(token.pos_)
+
+ if not start:
+ inputs = torch.cat((inputs, pos), dim=0)
+ else:
+ inputs = pos
+ start = False
+ # end if
+ # end for
+
+ return inputs, inputs.size()[0]
+ # end convert
+
+# end PartOfSpeech
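+
+# A minimal usage sketch, assuming "en_core_web_lg" is installed: each token
+# becomes a one-hot row over the 16 coarse part-of-speech tags listed in
+# get_tags().
+#
+#   from echotorch.transforms.text import PartOfSpeech
+#   pos = PartOfSpeech()
+#   inputs, length = pos(u"The cat sleeps")
+#   print(inputs.size())   # (3, 16)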
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/Tag.py b/ESN/EchoTorch-master/echotorch/transforms/text/Tag.py
new file mode 100644
index 0000000..86bb13c
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/Tag.py
@@ -0,0 +1,91 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+import spacy
+from .Transformer import Transformer
+
+
+# Transform text to tag vectors
+class Tag(Transformer):
+ """
+ Transform text to tag vectors
+ """
+
+ # Constructor
+ def __init__(self, model="en_core_web_lg"):
+ """
+ Constructor
+ :param model: Spacy's model to load.
+ """
+ # Super constructor
+ super(Tag, self).__init__()
+
+ # Properties
+ self.model = model
+ self.nlp = spacy.load(model)
+ # end __init__
+
+ ##############################################
+ # Public
+ ##############################################
+
+ # Get tags
+ def get_tags(self):
+ """
+ Get all tags.
+ :return: A list of tags.
+ """
+ return [u"''", u",", u":", u".", u"``", u"-LRB-", u"-RRB-", u"AFX", u"CC", u"CD", u"DT", u"EX", u"FW",
+ u"IN", u"JJ", u"JJR", u"JJS", u"LS", u"MD", u"NN", u"NNS", u"NNP", u"NNPS", u"PDT", u"POS", u"PRP",
+ u"PRP$", u"RB", u"RBR", u"RBS", u"RP", u"SYM", u"TO", u"UH", u"VB", u"VBZ", u"VBP", u"VBD", u"VBN",
+ u"VBG", u"WDT", u"WP", u"WP$", u"WRB", u"X"]
+ # end get_tags
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+ Convert a string to an ESN input
+ :param text: Text to convert
+ :return: Tensor of one-hot tag symbols
+ """
+ # Inputs as tensor
+ inputs = torch.FloatTensor(1, self.input_dim)
+
+ # Start
+ start = True
+
+ # For each token
+ for token in self.nlp(text):
+ # Replace if not a known tag
+ if token.tag_ not in self.symbols:
+ token_tag = u"X"
+ else:
+ token_tag = token.tag_
+ # end if
+
+ # Get tag
+ tag = self.tag_to_symbol(token_tag)
+
+ # Add
+ if not start:
+ inputs = torch.cat((inputs, tag), dim=0)
+ else:
+ inputs = tag
+ start = False
+ # end if
+ # end for
+
+ return inputs, inputs.size()[0]
+ # end convert
+
+# end Tag
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/Token.py b/ESN/EchoTorch-master/echotorch/transforms/text/Token.py
new file mode 100644
index 0000000..9c2314e
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/Token.py
@@ -0,0 +1,78 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import spacy
+
+
+# Transform text to a list of tokens
+class Token(object):
+ """
+ Transform text to a list of tokens
+ """
+
+ # Constructor
+ def __init__(self, model="en_core_web_lg"):
+ """
+ Constructor
+ :param model: Spacy's model to load.
+ """
+ # Properties
+ self.model = model
+ self.nlp = spacy.load(model)
+ # end __init__
+
+ ##############################################
+ # Properties
+ ##############################################
+
+ # Get the number of inputs
+ @property
+ def input_dim(self):
+ """
+ Get the number of inputs.
+ :return: The input size.
+ """
+ return 1
+ # end input_dim
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, text):
+ """
+ Convert a string to a list of tokens
+ :param text: Text to convert
+ :return: List of tokens and its length
+ """
+ # Inputs as a list
+ tokens = list()
+
+ # For each token
+ for token in self.nlp(text):
+ tokens.append(token.text)
+ # end for
+
+ return tokens, len(tokens)
+ # end convert
+
+ ##############################################
+ # Private
+ ##############################################
+
+ # Get inputs size
+ def _get_inputs_size(self):
+ """
+ Get inputs size.
+ :return:
+ """
+ return 1
+ # end _get_inputs_size
+
+ ##############################################
+ # Static
+ ##############################################
+
+# end Token
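+
+# A minimal usage sketch, assuming "en_core_web_lg" is installed: unlike the
+# tensor-producing transformers, Token returns a plain list of token strings.
+#
+#   from echotorch.transforms.text import Token
+#   tok = Token()
+#   tokens, length = tok(u"Echo state networks.")
+#   print(tokens)   # [u"Echo", u"state", u"networks", u"."]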
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/Transformer.py b/ESN/EchoTorch-master/echotorch/transforms/text/Transformer.py
new file mode 100644
index 0000000..1e167b0
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/Transformer.py
@@ -0,0 +1,94 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+
+
+# Base class for text transformers
+class Transformer(object):
+ """
+ Base class for text transformers
+ """
+
+ # Constructor
+ def __init__(self):
+ """
+ Constructor
+ """
+ # Properties
+ self.symbols = self.generate_symbols()
+ # end __init__
+
+ ##############################################
+ # Properties
+ ##############################################
+
+ # Get the number of inputs
+ @property
+ def input_dim(self):
+ """
+ Get the number of inputs.
+ :return: The input size.
+ """
+ return len(self.get_tags())
+ # end input_dim
+
+ ##############################################
+ # Public
+ ##############################################
+
+ # Get tags
+ def get_tags(self):
+ """
+ Get tags.
+ :return: A list of tags.
+ """
+ return []
+ # end get_tags
+
+ # Get symbol from tag
+ def tag_to_symbol(self, tag):
+ """
+ Get symbol from tag.
+ :param tag: Tag.
+ :return: The corresponding symbols.
+ """
+ if tag in self.symbols.keys():
+ return self.symbols[tag]
+ return None
+ # end tag_to_symbol
+
+ # Generate symbols
+ def generate_symbols(self):
+ """
+ Generate word symbols.
+ :return: Dictionary of tag to symbols.
+ """
+ result = dict()
+ for index, p in enumerate(self.get_tags()):
+ result[p] = torch.zeros(1, self.input_dim)
+ result[p][0, index] = 1.0
+ # end for
+ return result
+ # end generate_symbols
+
+ ##############################################
+ # Override
+ ##############################################
+
+ # Convert a string
+ def __call__(self, tokens):
+ """
+ Convert a string to an ESN input
+ :param tokens: Text to convert
+ :return: A list of symbols
+ """
+ pass
+ # end convert
+
+ ##############################################
+ # Static
+ ##############################################
+
+# end Transformer
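+
+# A minimal subclassing sketch (the two tags are illustrative): a concrete
+# transformer only has to provide get_tags(); generate_symbols() then builds
+# one one-hot row per tag and tag_to_symbol() performs the lookup.
+#
+#   class YesNo(Transformer):
+#       def get_tags(self):
+#           return [u"yes", u"no"]
+#
+#       def __call__(self, text):
+#           symbols = [self.tag_to_symbol(t) for t in text.split()]
+#           inputs = torch.cat(symbols, dim=0)
+#           return inputs, inputs.size(0)
+#
+#   yn = YesNo()
+#   inputs, length = yn(u"yes no yes")   # (3, 2) one-hot tensor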
diff --git a/ESN/EchoTorch-master/echotorch/transforms/text/__init__.py b/ESN/EchoTorch-master/echotorch/transforms/text/__init__.py
new file mode 100644
index 0000000..08fed1c
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/transforms/text/__init__.py
@@ -0,0 +1,21 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+from .Character import Character
+from .Character2Gram import Character2Gram
+from .Character3Gram import Character3Gram
+from .Compose import Compose
+from .Embedding import Embedding
+from .FunctionWord import FunctionWord
+from .GensimModel import GensimModel
+from .GloveVector import GloveVector
+from .PartOfSpeech import PartOfSpeech
+from .Tag import Tag
+from .Token import Token
+from .Transformer import Transformer
+
+__all__ = [
+ 'Character', 'Character2Gram', 'Character3Gram', 'Compose', 'Embedding', 'FunctionWord', 'GensimModel',
+ 'GloveVector', 'PartOfSpeech', 'Tag', 'Token', 'Transformer'
+]
diff --git a/ESN/EchoTorch-master/echotorch/utils/__init__.py b/ESN/EchoTorch-master/echotorch/utils/__init__.py
new file mode 100644
index 0000000..1699038
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/utils/__init__.py
@@ -0,0 +1,11 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+from .error_measures import nrmse, nmse, rmse, mse, perplexity, cumperplexity
+from .utility_functions import spectral_radius, deep_spectral_radius, normalize, average_prob, max_average_through_time
+
+__all__ = [
+ 'nrmse', 'nmse', 'rmse', 'mse', 'perplexity', 'cumperplexity', 'spectral_radius', 'deep_spectral_radius',
+ 'normalize', 'average_prob', 'max_average_through_time'
+]
diff --git a/ESN/EchoTorch-master/echotorch/utils/error_measures.py b/ESN/EchoTorch-master/echotorch/utils/error_measures.py
new file mode 100644
index 0000000..d129dca
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/utils/error_measures.py
@@ -0,0 +1,165 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+import math
+from decimal import Decimal
+import numpy as np
+
+
+# Normalized root-mean-square error
+def nrmse(outputs, targets):
+ """
+ Normalized root-mean square error
+ :param outputs: Module's outputs
+ :param targets: Target signal to be learned
+ :return: Normalized root-mean square deviation
+ """
+ # Flatten tensors
+ outputs = outputs.view(outputs.nelement())
+ targets = targets.view(targets.nelement())
+
+ # Check dim
+ if outputs.size() != targets.size():
+ raise ValueError(u"Ouputs and targets tensors don have the same number of elements")
+ # end if
+
+ # Normalization with N-1
+ var = torch.std(targets) ** 2
+
+ # Error
+ error = (targets - outputs) ** 2
+
+ # Return
+ return float(math.sqrt(torch.mean(error) / var))
+# end nrmse
+
+
+# Root-mean square error
+def rmse(outputs, targets):
+ """
+ Root-mean square error
+ :param outputs: Module's outputs
+ :param targets: Target signal to be learned
+ :return: Root-mean square deviation
+ """
+ # Flatten tensors
+ outputs = outputs.view(outputs.nelement())
+ targets = targets.view(targets.nelement())
+
+ # Check dim
+ if outputs.size() != targets.size():
+ raise ValueError(u"Ouputs and targets tensors don have the same number of elements")
+ # end if
+
+ # Error
+ error = (targets - outputs) ** 2
+
+ # Return
+ return float(math.sqrt(torch.mean(error)))
+# end rmse
+
+
+# Mean square error
+def mse(outputs, targets):
+ """
+ Mean square error
+ :param outputs: Module's outputs
+ :param targets: Target signal to be learned
+ :return: Mean square deviation
+ """
+ # Flatten tensors
+ outputs = outputs.view(outputs.nelement())
+ targets = targets.view(targets.nelement())
+
+ # Check dim
+ if outputs.size() != targets.size():
+ raise ValueError(u"Ouputs and targets tensors don have the same number of elements")
+ # end if
+
+ # Error
+ error = (targets - outputs) ** 2
+
+ # Return
+ return float(torch.mean(error))
+# end mse
+
+
+# Normalized mean square error
+def nmse(outputs, targets):
+ """
+ Normalized mean square error
+ :param outputs: Module's output
+ :param targets: Target signal to be learned
+ :return: Normalized mean square deviation
+ """
+ # Flatten tensors
+ outputs = outputs.view(outputs.nelement())
+ targets = targets.view(targets.nelement())
+
+ # Check dim
+ if outputs.size() != targets.size():
+ raise ValueError(u"Ouputs and targets tensors don have the same number of elements")
+ # end if
+
+ # Normalization with N-1
+ var = torch.std(targets) ** 2
+
+ # Error
+ error = (targets - outputs) ** 2
+
+ # Return
+ return float(torch.mean(error) / var)
+# end nmse
+
+
+# Perplexity
+def perplexity(output_probs, targets, log=False):
+ """
+ Perplexity
+ :param output_probs: Output probabilities for each word/token (length x n_tokens)
+ :param targets: Real word indexes
+ :param log: True if output_probs are natural log-probabilities
+ :return: Perplexity
+ """
+ pp = Decimal(1.0)
+ e_vec = torch.FloatTensor(output_probs.size(0), output_probs.size(1)).fill_(np.e)
+ if log:
+ set_p = 1.0 / torch.gather(torch.pow(e_vec, exponent=output_probs.data.cpu()), 1,
+ targets.data.cpu().unsqueeze(1))
+ else:
+ set_p = 1.0 / torch.gather(output_probs.data.cpu(), 1, targets.data.cpu().unsqueeze(1))
+ # end if
+ for j in range(set_p.size(0)):
+ pp *= Decimal(set_p[j][0])
+ # end for
+ # Perplexity is the geometric mean of the inverse probabilities,
+ # i.e. the N-th root of their product
+ return pp ** (Decimal(1.0) / Decimal(set_p.size(0)))
+# end perplexity
+
+
+# Cumulative perplexity
+def cumperplexity(output_probs, targets, log=False):
+ """
+ Cumulative perplexity
+ :param output_probs: Output probabilities for each word/token (length x n_tokens)
+ :param targets: Real word indexes
+ :param log: True if output_probs are already log-probabilities
+ :return: Sum of the log2 probabilities of the target tokens
+ """
+ # Get prob of test events
+ set_p = torch.gather(output_probs, 1, targets.unsqueeze(1))
+
+ # Make sure it's log
+ if not log:
+ set_p = torch.log(set_p)
+ # end if
+
+ # Log2
+ set_log = set_p / np.log(2)
+
+ # sum log
+ sum_log = torch.sum(set_log)
+
+ # Return
+ return sum_log
+# end cumperplexity
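+
+# A minimal worked example of the deviation measures above; since the targets
+# below have unit variance (std with N-1 normalization), nrmse equals rmse.
+#
+#   import torch
+#   out = torch.Tensor([0.0, 0.5, 1.0])
+#   tgt = torch.Tensor([0.0, 1.0, 2.0])
+#   mse(out, tgt)     # (0.0 + 0.25 + 1.0) / 3 = 0.4167
+#   rmse(out, tgt)    # sqrt(0.4167) = 0.6455
+#   nrmse(out, tgt)   # 0.6455 / std(tgt) = 0.6455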
diff --git a/ESN/EchoTorch-master/echotorch/utils/utility_functions.py b/ESN/EchoTorch-master/echotorch/utils/utility_functions.py
new file mode 100644
index 0000000..6d0bb79
--- /dev/null
+++ b/ESN/EchoTorch-master/echotorch/utils/utility_functions.py
@@ -0,0 +1,64 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import torch
+
+
+# Compute spectral radius of a square 2-D tensor
+def spectral_radius(m):
+ """
+ Compute spectral radius of a square 2-D tensor
+ :param m: squared 2D tensor
+ :return:
+ """
+ return torch.max(torch.abs(torch.eig(m)[0]))
+# end spectral_radius
+
+
+# Compute spectral radius of a square 2-D tensor for stacked-ESN
+def deep_spectral_radius(m, leaky_rate):
+ """
+ Compute spectral radius of a square 2-D tensor for stacked-ESN
+ :param m: square 2-D tensor
+ :param leaky_rate: Layer's leaky rate
+ :return: The spectral radius of the equivalent leaky-integrated matrix
+ """
+ return spectral_radius((1.0 - leaky_rate) * torch.eye(m.size(0), m.size(0)) + leaky_rate * m)
+# end spectral_radius
+
+
+# Normalize a tensor on a single dimension
+def normalize(tensor, dim=1):
+ """
+ Normalize a tensor on a single dimension
+ :param tensor: Input tensor
+ :param dim: Dimension along which to normalize
+ :return: The tensor scaled to sum to one along dim
+ """
+ # Sum-to-one scaling (assumes non-negative entries, e.g. probabilities)
+ return tensor / torch.sum(tensor, dim=dim, keepdim=True)
+# end normalize
+
+
+# Average probabilties through time
+def average_prob(tensor, dim=0):
+ """
+ Average probabilities through time
+ :param tensor: Probability tensor
+ :param dim: Time dimension
+ :return: The mean over dim
+ """
+ return torch.mean(tensor, dim=dim)
+# end average_prob
+
+
+# Max average through time
+def max_average_through_time(tensor, dim=0):
+ """
+ Max average through time
+ :param tensor: Probability tensor
+ :param dim: Time dimension
+ :return: Index of the maximum time-averaged value
+ """
+ average = torch.mean(tensor, dim=dim)
+ return torch.max(average, dim=dim)[1]
+# end max_average_through_time
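+
+# A minimal usage sketch: the classic ESN initialisation step of rescaling a
+# random reservoir matrix so that its spectral radius matches a target value.
+#
+#   import torch
+#   w = torch.randn(100, 100)
+#   w *= 0.9 / spectral_radius(w)    # spectral_radius(w) is now ~0.9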
diff --git a/ESN/EchoTorch-master/examples/MNIST/convert_images.py b/ESN/EchoTorch-master/examples/MNIST/convert_images.py
new file mode 100644
index 0000000..2eeca66
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/MNIST/convert_images.py
@@ -0,0 +1,37 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/MNIST/convert_images.py
+# Description : Convert images to time series.
+# Date : 6th of April, 2017
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+"""
+Created on 6 April 2017
+@author: Nils Schaetti
+"""
+
+import sys
+import os
+sys.path.insert(0, os.path.abspath('./../..'))
+import echotorch
+
+
+if __name__ == "__main__":
+
+ converter = echotorch.datasets.ImageConverter()
+
+# end if
diff --git a/ESN/EchoTorch-master/examples/datasets/logistic_map.py b/ESN/EchoTorch-master/examples/datasets/logistic_map.py
new file mode 100644
index 0000000..b31818a
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/datasets/logistic_map.py
@@ -0,0 +1,18 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import echotorch.datasets
+from torch.utils.data.dataloader import DataLoader
+
+
+# Logistic map dataset
+log_map = echotorch.datasets.LogisticMapDataset(10000, 10)
+
+# Data loader
+log_map_dataset = DataLoader(log_map, batch_size=10, shuffle=True)
+
+# For each sample
+for data in log_map_dataset:
+ print(data[0])
+# end for
diff --git a/ESN/EchoTorch-master/examples/generation/narma10_esn_feedbacks.py b/ESN/EchoTorch-master/examples/generation/narma10_esn_feedbacks.py
new file mode 100644
index 0000000..9fca247
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/generation/narma10_esn_feedbacks.py
@@ -0,0 +1,103 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/generation/narma10_esn_feedbacks.py
+# Description : NARMA-10 prediction and generation with a feedback ESN.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+from echotorch.datasets.NARMADataset import NARMADataset
+import echotorch.nn as etnn
+import echotorch.utils
+from torch.autograd import Variable
+from torch.utils.data.dataloader import DataLoader
+import numpy as np
+import mdp
+
+# Dataset params
+train_sample_length = 5000
+test_sample_length = 1000
+n_train_samples = 1
+n_test_samples = 1
+batch_size = 1
+spectral_radius = 0.9
+leaky_rate = 1.0
+input_dim = 1
+n_hidden = 100
+
+# Use CUDA?
+use_cuda = False
+use_cuda = torch.cuda.is_available() if use_cuda else False
+
+# Manual seed
+mdp.numx.random.seed(1)
+np.random.seed(2)
+torch.manual_seed(1)
+
+# NARMA-10 dataset
+narma10_train_dataset = NARMADataset(train_sample_length, n_train_samples, system_order=10, seed=1)
+narma10_test_dataset = NARMADataset(test_sample_length, n_test_samples, system_order=10, seed=10)
+
+# Data loader
+trainloader = DataLoader(narma10_train_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+testloader = DataLoader(narma10_test_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+
+# ESN cell
+esn = etnn.ESN(
+ input_dim=input_dim,
+ hidden_dim=n_hidden,
+ output_dim=1,
+ spectral_radius=spectral_radius,
+ learning_algo='inv',
+ # leaky_rate=leaky_rate,
+ feedbacks=True
+)
+if use_cuda:
+ esn.cuda()
+# end if
+
+# For each batch
+for data in trainloader:
+ # Inputs and outputs
+ inputs, targets = data
+
+ # To variable
+ inputs, targets = Variable(inputs), Variable(targets)
+ if use_cuda: inputs, targets = inputs.cuda(), targets.cuda()
+
+ # Accumulate xTx and xTy
+ esn(inputs, targets)
+# end for
+
+# Finalize training
+esn.finalize()
+
+# Test MSE
+dataiter = iter(testloader)
+test_u, test_y = next(dataiter)
+test_u, test_y = Variable(test_u), Variable(test_y)
+gen_u = Variable(torch.zeros(batch_size, test_sample_length, input_dim))
+if use_cuda: test_u, test_y, gen_u = test_u.cuda(), test_y.cuda(), gen_u.cuda()
+y_predicted = esn(test_u)
+print(u"Test MSE: {}".format(echotorch.utils.mse(y_predicted.data, test_y.data)))
+print(u"Test NRMSE: {}".format(echotorch.utils.nrmse(y_predicted.data, test_y.data)))
+print(u"")
+
+y_generated = esn(gen_u)
+print(y_generated)
diff --git a/ESN/EchoTorch-master/examples/memory/memtest.py b/ESN/EchoTorch-master/examples/memory/memtest.py
new file mode 100644
index 0000000..33adaee
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/memory/memtest.py
@@ -0,0 +1,74 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/memory/memtest.py
+# Description : Memory test dataset visualisation.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+from echotorch.datasets.MemTestDataset import MemTestDataset
+import echotorch.nn as etnn
+import torch.nn as nn
+from torch.autograd import Variable
+from torch.utils.data.dataloader import DataLoader
+import matplotlib.pyplot as plt
+
+# Dataset params
+sample_length = 20
+n_samples = 2
+batch_size = 5
+
+# MemTest dataset
+memtest_dataset = MemTestDataset(sample_length, n_samples, seed=1)
+
+# Data loader
+dataloader = DataLoader(memtest_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+
+# ESN properties
+input_dim = 1
+n_hidden = 20
+
+# ESN cell
+esn = etnn.ESNCell(input_dim, n_hidden)
+
+# Linear layer
+linear = nn.Linear(n_hidden, 1)
+
+# Objective function
+criterion = nn.MSELoss()
+
+# Learning rate
+learning_rate = 0.0001
+
+# Number of iterations
+n_iterations = 10
+
+for data in dataloader:
+ # For each sample
+ for i_sample in range(data[0].size()[0]):
+ # Inputs and outputs
+ inputs, outputs = data[0][i_sample], data[1][i_sample]
+ inputs, outputs = Variable(inputs), Variable(outputs)
+
+ # Show the graph
+ plt.plot(inputs.data.numpy(), c='b')
+ plt.plot(outputs.data[:, 9].numpy(), c='r')
+ plt.show()
+ # end for
+# end for
\ No newline at end of file
diff --git a/ESN/EchoTorch-master/examples/models/NilsNet_example.py b/ESN/EchoTorch-master/examples/models/NilsNet_example.py
new file mode 100644
index 0000000..7944647
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/models/NilsNet_example.py
@@ -0,0 +1,92 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/models/NilsNet_example.py
+# Description : NilsNet example on image data.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+import echotorch.models
+from torchvision import datasets, transforms
+import matplotlib.pyplot as plt
+import numpy as np
+import os
+from torch.autograd import Variable
+
+
+def imshow(inp, title=None):
+ """Imshow for Tensor."""
+ inp = inp.numpy().transpose((1, 2, 0))
+ mean = np.array([0.485, 0.456, 0.406])
+ std = np.array([0.229, 0.224, 0.225])
+ inp = std * inp + mean
+ inp = np.clip(inp, 0, 1)
+ plt.imshow(inp)
+ if title is not None:
+ plt.title(title)
+ plt.show()
+# end imshow
+
+# Data augmentation and normalization for training
+# Just normalization for validation
+data_transforms = {
+ 'train': transforms.Compose([
+ transforms.RandomResizedCrop(224),
+ transforms.RandomHorizontalFlip(),
+ transforms.ToTensor(),
+ transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+ ]),
+ 'val': transforms.Compose([
+ transforms.Resize(256),
+ transforms.CenterCrop(224),
+ transforms.ToTensor(),
+ transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+ ]),
+}
+
+data_dir = 'hymenoptera_data'
+image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
+ data_transforms[x])
+ for x in ['train', 'val']}
+dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
+ shuffle=True, num_workers=4)
+ for x in ['train', 'val']}
+dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
+class_names = image_datasets['train'].classes
+
+# Create a NilsNet
+nilsnet = echotorch.models.NilsNet(reservoir_dim=1000, sfa_dim=100, ica_dim=100)
+
+# Get a batch of training data
+inputs, classes = next(iter(dataloaders['train']))
+print(inputs.size())
+print(classes.size())
+
+inputs = Variable(inputs)
+classes = Variable(classes)
+
+# Make a grid from batch
+# out = torchvision.utils.make_grid(inputs)
+
+# imshow(out, title=[class_names[x] for x in classes])
+
+outputs = nilsnet(inputs)
+
+print(outputs)
+print(outputs.size())
\ No newline at end of file
diff --git a/ESN/EchoTorch-master/examples/nodes/pca_tests.py b/ESN/EchoTorch-master/examples/nodes/pca_tests.py
new file mode 100644
index 0000000..1fd4ed6
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/nodes/pca_tests.py
@@ -0,0 +1,63 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/nodes/pca_tests.py
+# Description : Comparing PCA with MDP and EchoTorch.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+import echotorch.nn as etnn
+from torch.autograd import Variable
+import mdp
+
+
+# Settings
+input_dim = 10
+output_dim = 3
+tlen = 500
+
+# Generate
+training_samples = torch.randn(1, tlen, input_dim)
+test_samples = torch.randn(1, tlen, input_dim)
+
+# Generate
+training_samples_np = training_samples[0].numpy()
+test_samples_np = test_samples[0].numpy()
+
+# Show
+print(u"Training samples : {}".format(training_samples_np))
+print(u"Test samples : {}".format(test_samples_np))
+
+# PCA node
+mdp_pca_node = mdp.Flow([mdp.nodes.PCANode(input_dim=input_dim, output_dim=output_dim)])
+mdp_pca_node.train(training_samples_np)
+pca_reduced = mdp_pca_node(test_samples_np)
+
+# Show
+print(u"PCA reduced : {}".format(pca_reduced))
+
+# EchoTorch PCA node
+et_pca_node = etnn.PCACell(input_dim=input_dim, output_dim=output_dim)
+et_pca_node(Variable(training_samples))
+et_pca_node.finalize()
+et_reduced = et_pca_node(Variable(test_samples))
+
+# Show
+print(u"Reduced with EchoTorch/PCA :")
+print(et_reduced)
diff --git a/ESN/EchoTorch-master/examples/switch_attractor/switch_attractor_esn.py b/ESN/EchoTorch-master/examples/switch_attractor/switch_attractor_esn.py
new file mode 100644
index 0000000..6127954
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/switch_attractor/switch_attractor_esn.py
@@ -0,0 +1,98 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/switch_attractor/switch_attractor_esn.py
+# Description : Attractor switching task with ESN.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+from echotorch.datasets.SwitchAttractorDataset import SwitchAttractorDataset
+import echotorch.nn as etnn
+import echotorch.utils
+from torch.autograd import Variable
+from torch.utils.data.dataloader import DataLoader
+import numpy as np
+import mdp
+import matplotlib.pyplot as plt
+
+# Dataset params
+train_sample_length = 1000
+test_sample_length = 1000
+n_train_samples = 40
+n_test_samples = 10
+batch_size = 1
+spectral_radius = 0.9
+leaky_rate = 1.0
+input_dim = 1
+n_hidden = 100
+
+# Use CUDA?
+use_cuda = False
+use_cuda = torch.cuda.is_available() if use_cuda else False
+
+# Manual seed
+mdp.numx.random.seed(1)
+np.random.seed(2)
+torch.manual_seed(1)
+
+# Switch attractor dataset
+switch_train_dataset = SwitchAttractorDataset(train_sample_length, n_train_samples, seed=1)
+switch_test_dataset = SwitchAttractorDataset(test_sample_length, n_test_samples, seed=10)
+
+# Data loader
+trainloader = DataLoader(switch_train_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+testloader = DataLoader(switch_test_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+
+# ESN cell
+esn = etnn.LiESN(input_dim=input_dim, hidden_dim=n_hidden, output_dim=1, spectral_radius=spectral_radius,
+ learning_algo='inv', leaky_rate=leaky_rate, feedbacks=True)
+if use_cuda:
+ esn.cuda()
+# end if
+
+# For each batch
+for data in trainloader:
+ # Inputs and outputs
+ inputs, targets = data
+
+ # To variable
+ inputs, targets = Variable(inputs), Variable(targets)
+ if use_cuda: inputs, targets = inputs.cuda(), targets.cuda()
+ # Accumulate xTx and xTy
+ esn(inputs, targets)
+# end for
+
+# Finalize training
+esn.finalize()
+
+# For each batch
+for data in testloader:
+ # Test MSE
+ test_u, test_y = data
+ test_u, test_y = Variable(test_u), Variable(test_y)
+ if use_cuda: test_u, test_y = test_u.cuda(), test_y.cuda()
+ y_predicted = esn(test_u)
+ plt.ylim(ymax=10)
+ plt.plot(test_y.data[0].numpy(), c='b')
+ plt.plot(y_predicted.data[0, :, 0].numpy(), c='r')
+ plt.show()
+# end for
diff --git a/ESN/EchoTorch-master/examples/timeserie_prediction/mackey_glass_esn.py b/ESN/EchoTorch-master/examples/timeserie_prediction/mackey_glass_esn.py
new file mode 100644
index 0000000..48375fb
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/timeserie_prediction/mackey_glass_esn.py
@@ -0,0 +1,118 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/timeserie_prediction/mackey_glass_esn.py
+# Description : Mackey-Glass time series prediction with ESN.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+from echotorch.datasets.MackeyGlassDataset import MackeyGlassDataset
+import echotorch.nn as etnn
+import torch.nn as nn
+from torch.autograd import Variable
+from torch.utils.data.dataloader import DataLoader
+import matplotlib.pyplot as plt
+
+# Dataset params
+sample_length = 1000
+n_samples = 40
+batch_size = 5
+
+# Mackey-Glass dataset
+mackey_glass_dataset = MackeyGlassDataset(sample_length, n_samples, tau=30)
+
+# Data loader
+dataloader = DataLoader(mackey_glass_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+
+# ESN properties
+input_dim = 1
+n_hidden = 20
+
+# ESN cell
+esn = etnn.ESNCell(input_dim, n_hidden)
+
+# Linear layer
+linear = nn.Linear(n_hidden, 1)
+
+# Objective function
+criterion = nn.MSELoss()
+
+# Learning rate
+learning_rate = 0.0001
+
+# Number of iterations
+n_iterations = 10
+
+# For each iteration
+for i_iter in range(n_iterations):
+ # Iterate through batches
+ for i_batch, sample_batched in enumerate(dataloader):
+ # For each sample
+ for i_sample in range(sample_batched.size()[0]):
+ # Inputs and outputs
+ inputs = Variable(sample_batched[i_sample][:-1], requires_grad=False)
+ outputs = Variable(sample_batched[i_sample][1:], requires_grad=False)
+ esn_outputs = torch.zeros(sample_length-1, 1)
+
+ # Init hidden
+ hidden = esn.init_hidden()
+
+ # Zero grad
+ esn.zero_grad()
+
+ # Null loss
+ loss = 0
+
+ # For each input
+ for pos in range(sample_length-1):
+ # Compute next state
+ next_hidden = esn(inputs[pos], hidden)
+
+ # Linear output
+ out = linear(next_hidden)
+ esn_outputs[pos, :] = out.data
+
+ # Add loss
+ loss += criterion(out, outputs[pos])
+ # end for
+
+ # Loss
+ loss.div_(sample_length-1)
+
+ loss.backward()
+
+ # Update parameters
+ for p in linear.parameters():
+ p.data.add_(-learning_rate, p.grad.data)
+ # end for
+
+ # Show the graph only for the last sample of the iteration
+ # if i_batch == len(dataloader) - 1 and i_sample == len(sample_batched) - 1:
+ # plt.plot(inputs.data.numpy(), c='b')
+ # plt.plot(outputs.data.numpy(), c='lightblue')
+ # plt.plot(esn_outputs.numpy(), c='r')
+ # plt.show()
+ # end if
+ # end for
+ # end for
+
+ # Print
+ print(u"Iteration {}, loss {}".format(i_iter, loss.data[0]))
+# end for
diff --git a/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_esn.py b/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_esn.py
new file mode 100644
index 0000000..5ab3c89
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_esn.py
@@ -0,0 +1,101 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/timeserie_prediction/narma10_esn.py
+# Description : NARMA-10 prediction with ESN.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+from echotorch.datasets.NARMADataset import NARMADataset
+import echotorch.nn as etnn
+import echotorch.utils
+from torch.autograd import Variable
+from torch.utils.data.dataloader import DataLoader
+import numpy as np
+import mdp
+
+# Dataset params
+train_sample_length = 5000
+test_sample_length = 1000
+n_train_samples = 1
+n_test_samples = 1
+batch_size = 1
+spectral_radius = 0.9
+leaky_rate = 1.0
+input_dim = 1
+n_hidden = 100
+
+# Use CUDA?
+use_cuda = False
+use_cuda = torch.cuda.is_available() if use_cuda else False
+
+# Manual seed
+mdp.numx.random.seed(1)
+np.random.seed(2)
+torch.manual_seed(1)
+
+# NARMA-10 dataset
+narma10_train_dataset = NARMADataset(train_sample_length, n_train_samples, system_order=10, seed=1)
+narma10_test_dataset = NARMADataset(test_sample_length, n_test_samples, system_order=10, seed=10)
+
+# Data loader
+trainloader = DataLoader(narma10_train_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+testloader = DataLoader(narma10_test_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+
+# ESN cell
+esn = etnn.LiESN(input_dim=input_dim, hidden_dim=n_hidden, output_dim=1, spectral_radius=spectral_radius, learning_algo='inv', leaky_rate=leaky_rate)
+if use_cuda:
+ esn.cuda()
+# end if
+
+# For each batch
+for data in trainloader:
+ # Inputs and outputs
+ inputs, targets = data
+
+ # To variable
+ inputs, targets = Variable(inputs), Variable(targets)
+ if use_cuda: inputs, targets = inputs.cuda(), targets.cuda()
+
+ # Accumulate xTx and xTy
+ esn(inputs, targets)
+# end for
+
+# Finalize training
+esn.finalize()
+
+# Train MSE
+dataiter = iter(trainloader)
+train_u, train_y = next(dataiter)
+train_u, train_y = Variable(train_u), Variable(train_y)
+if use_cuda: train_u, train_y = train_u.cuda(), train_y.cuda()
+y_predicted = esn(train_u)
+print(u"Train MSE: {}".format(echotorch.utils.mse(y_predicted.data, train_y.data)))
+print(u"Test NRMSE: {}".format(echotorch.utils.nrmse(y_predicted.data, train_y.data)))
+print(u"")
+
+# Test MSE
+dataiter = iter(testloader)
+test_u, test_y = next(dataiter)
+test_u, test_y = Variable(test_u), Variable(test_y)
+if use_cuda: test_u, test_y = test_u.cuda(), test_y.cuda()
+y_predicted = esn(test_u)
+print(u"Test MSE: {}".format(echotorch.utils.mse(y_predicted.data, test_y.data)))
+print(u"Test NRMSE: {}".format(echotorch.utils.nrmse(y_predicted.data, test_y.data)))
+print(u"")
diff --git a/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_esn_sgd.py b/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_esn_sgd.py
new file mode 100644
index 0000000..93bc453
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_esn_sgd.py
@@ -0,0 +1,118 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/timeserie_prediction/narma10_esn_sgd.py
+# Description : NARMA-10 prediction with ESN trained by stochastic gradient descent.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+import torch.optim as optim
+from echotorch.datasets.NARMADataset import NARMADataset
+import echotorch.nn as etnn
+import echotorch.utils
+import torch.nn as nn
+from torch.autograd import Variable
+from torch.utils.data.dataloader import DataLoader
+import numpy as np
+import mdp
+import matplotlib.pyplot as plt
+
+# Parameters
+spectral_radius = 0.9
+leaky_rate = 1.0
+learning_rate = 0.04
+input_dim = 1
+n_hidden = 100
+n_iterations = 2000
+train_sample_length = 5000
+test_sample_length = 1000
+n_train_samples = 1
+n_test_samples = 1
+batch_size = 1
+momentum = 0.95
+weight_decay = 0
+
+# Use CUDA?
+use_cuda = True
+use_cuda = torch.cuda.is_available() if use_cuda else False
+
+# Manual seed
+mdp.numx.random.seed(1)
+np.random.seed(2)
+torch.manual_seed(1)
+
+# NARMA-10 dataset
+narma10_train_dataset = NARMADataset(train_sample_length, n_train_samples, system_order=10, seed=1)
+narma10_test_dataset = NARMADataset(test_sample_length, n_test_samples, system_order=10, seed=10)
+
+# Data loader
+trainloader = DataLoader(narma10_train_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+testloader = DataLoader(narma10_test_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+
+# ESN cell
+esn = etnn.ESN(input_dim=input_dim, hidden_dim=n_hidden, output_dim=1, spectral_radius=spectral_radius, learning_algo='grad')
+if use_cuda:
+ esn.cuda()
+# end if
+
+# Objective function
+criterion = nn.MSELoss()
+
+# Stochastic Gradient Descent
+optimizer = optim.SGD(esn.parameters(), lr=learning_rate, momentum=momentum, weight_decay=weight_decay)
+
+# For each iteration
+for epoch in range(n_iterations):
+ # Iterate over batches
+ for data in trainloader:
+ # Inputs and outputs
+ inputs, targets = data
+ inputs, targets = Variable(inputs), Variable(targets)
+ if use_cuda: inputs, targets = inputs.cuda(), targets.cuda()
+
+ # Gradients to zero
+ optimizer.zero_grad()
+
+ # Forward
+ out = esn(inputs)
+ loss = criterion(out, targets)
+
+ # Backward pass
+ loss.backward()
+
+ # Optimize
+ optimizer.step()
+
+ # Print error measures
+ print(u"Train MSE: {}".format(float(loss.data)))
+ print(u"Train NRMSE: {}".format(echotorch.utils.nrmse(out.data, targets.data)))
+ # end for
+
+ # Test reservoir
+ dataiter = iter(testloader)
+ test_u, test_y = next(dataiter)
+ test_u, test_y = Variable(test_u), Variable(test_y)
+ if use_cuda: test_u, test_y = test_u.cuda(), test_y.cuda()
+ y_predicted = esn(test_u)
+
+ # Print error measures
+ print(u"Test MSE: {}".format(echotorch.utils.mse(y_predicted.data, test_y.data)))
+ print(u"Test NRMSE: {}".format(echotorch.utils.nrmse(y_predicted.data, test_y.data)))
+ print(u"")
+# end for
diff --git a/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_gated_esn.py b/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_gated_esn.py
new file mode 100644
index 0000000..343fc91
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_gated_esn.py
@@ -0,0 +1,134 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/timeserie_prediction/narma10_gated_esn.py
+# Description : NARMA-10 prediction with gated ESN.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+import torch.optim as optim
+from echotorch.datasets.NARMADataset import NARMADataset
+import echotorch.nn as etnn
+import echotorch.utils
+import torch.nn as nn
+from torch.autograd import Variable
+from torch.utils.data.dataloader import DataLoader
+import numpy as np
+import mdp
+import matplotlib.pyplot as plt
+
+# Parameters
+spectral_radius = 0.9
+leaky_rate = 1.0
+learning_rate = 0.04
+reservoir_dim = 100
+hidden_dim = 20
+input_dim = 1
+n_iterations = 2000
+train_sample_length = 5000
+test_sample_length = 1000
+n_train_samples = 1
+n_test_samples = 1
+batch_size = 1
+momentum = 0.95
+weight_decay = 0
+
+# Use CUDA?
+use_cuda = True
+use_cuda = use_cuda and torch.cuda.is_available()
+
+# Manual seed
+mdp.numx.random.seed(1)
+np.random.seed(2)
+torch.manual_seed(1)
+
+# NARMA10 dataset
+narma10_train_dataset = NARMADataset(train_sample_length, n_train_samples, system_order=10, seed=1)
+narma10_test_dataset = NARMADataset(test_sample_length, n_test_samples, system_order=10, seed=10)
+
+# Data loader
+trainloader = DataLoader(narma10_train_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+testloader = DataLoader(narma10_test_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+
+# Linear output
+linear = nn.Linear(in_features=hidden_dim, out_features=1)
+
+# ESN cell
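+# (following the constructor arguments, the GatedESN chains a reservoir of
+# size reservoir_dim, a PCA stage reducing to pca_dim, and a gated recurrent
+# update of size hidden_dim)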
+gated_esn = etnn.GatedESN(
+ input_dim=input_dim,
+ reservoir_dim=reservoir_dim,
+ pca_dim=hidden_dim,
+ hidden_dim=hidden_dim,
+ leaky_rate=leaky_rate,
+ spectral_radius=spectral_radius
+)
+if use_cuda:
+ gated_esn.cuda()
+ linear.cuda()
+# end if
+
+# Objective function
+criterion = nn.MSELoss()
+
+# Stochastic Gradient Descent over both the gated ESN and the linear readout
+optimizer = optim.SGD(list(gated_esn.parameters()) + list(linear.parameters()), lr=learning_rate, momentum=momentum, weight_decay=weight_decay)
+
+# For each iteration
+for epoch in range(n_iterations):
+ # Iterate over batches
+ for data in trainloader:
+ # Inputs and outputs
+ inputs, targets = data
+ inputs, targets = Variable(inputs), Variable(targets)
+ if use_cuda: inputs, targets = inputs.cuda(), targets.cuda()
+
+ # Gradients to zero
+ optimizer.zero_grad()
+
+ # Forward
+ out = linear(gated_esn(inputs))
+ loss = criterion(out, targets)
+
+ # Backward pass
+ loss.backward()
+
+ # Optimize
+ optimizer.step()
+
+ # Print error measures
+ print(u"Train MSE: {}".format(float(loss.data)))
+ print(u"Train NRMSE: {}".format(echotorch.utils.nrmse(out.data, targets.data)))
+ # end for
+
+ # Test reservoir
+ dataiter = iter(testloader)
+ test_u, test_y = next(dataiter)
+ test_u, test_y = Variable(test_u), Variable(test_y)
+ if use_cuda: test_u, test_y = test_u.cuda(), test_y.cuda()
+ y_predicted = linear(gated_esn(test_u))
+
+ # Print error measures
+ print(u"Test MSE: {}".format(echotorch.utils.mse(y_predicted.data, test_y.data)))
+ print(u"Test NRMSE: {}".format(echotorch.utils.nrmse(y_predicted.data, test_y.data)))
+ print(u"")
+# end for
diff --git a/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_stacked_esn.py b/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_stacked_esn.py
new file mode 100644
index 0000000..842c067
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/timeserie_prediction/narma10_stacked_esn.py
@@ -0,0 +1,78 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/timeserie_prediction/narma10_stacked_esn
+# Description : NARMA-10 prediction with a stacked ESN.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch
+from echotorch.datasets.NARMADataset import NARMADataset
+import echotorch.nn as etnn
+import echotorch.utils
+from torch.autograd import Variable
+from torch.utils.data.dataloader import DataLoader
+import numpy as np
+import mdp
+
+# Dataset params
+train_sample_length = 5000
+test_sample_length = 1000
+n_train_samples = 1
+n_test_samples = 1
+batch_size = 1
+spectral_radius = 0.9
+leaky_rates = [1.0, 0.5, 0.1]
+input_dim = 1
+n_hidden = [100, 100, 100]
+
+# Use CUDA?
+use_cuda = False
+use_cuda = use_cuda and torch.cuda.is_available()
+
+# Manual seed
+mdp.numx.random.seed(1)
+np.random.seed(2)
+torch.manual_seed(1)
+
+# NARMA10 dataset
+narma10_train_dataset = NARMADataset(train_sample_length, n_train_samples, system_order=10, seed=1)
+narma10_test_dataset = NARMADataset(test_sample_length, n_test_samples, system_order=10, seed=10)
+
+# Data loader
+trainloader = DataLoader(narma10_train_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+testloader = DataLoader(narma10_test_dataset, batch_size=batch_size, shuffle=False, num_workers=2)
+
+# ESN cell
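+# (three stacked reservoirs of 100 units each; the decreasing leaky rates
+# let the deeper layers integrate the signal over longer time scales)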
+esn = etnn.StackedESN(input_dim=input_dim, hidden_dim=n_hidden, output_dim=1, spectral_radius=spectral_radius, learning_algo='inv', leaky_rate=leaky_rates)
+
+# For each batch
+for data in trainloader:
+ # Inputs and outputs
+ inputs, targets = data
+
+ # To variable
+ inputs, targets = Variable(inputs), Variable(targets)
+ if use_cuda: inputs, targets = inputs.cuda(), targets.cuda()
+
+ # Feed the ESN; with learning_algo='inv' this accumulates xTx and xTy for the readout and returns the hidden states
+ hidden_states = esn(inputs, targets)
+ for i in range(10):
+ print(hidden_states[0, i])
+ # end for
+# end for
\ No newline at end of file
diff --git a/ESN/EchoTorch-master/examples/unsupervised_learning/sfa_logmap.py b/ESN/EchoTorch-master/examples/unsupervised_learning/sfa_logmap.py
new file mode 100644
index 0000000..4e8782e
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/unsupervised_learning/sfa_logmap.py
@@ -0,0 +1,49 @@
+# -*- coding: utf-8 -*-
+#
+
+# Imports
+import mdp
+import numpy as np
+import matplotlib.pyplot as plt
+
+# Init. random
+np.random.seed(0)
+
+# Parameters
+n = 10000
+p2 = np.pi * 2
+t = np.linspace(0, 1, n, endpoint=False)
+dforce = np.sin(p2*5*t) + np.sin(p2*11*t) + np.sin(p2*13*t)
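+# (dforce is a slowly varying driving force that modulates the logistic-map
+# parameter below; SFA should recover it as the slowest feature)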
+
+
+def logistic_map(x, r):
+ return r*x*(1-x)
+# end logistic_map
+
+# Series
+series = np.zeros((n, 1), 'd')
+series[0] = 0.6
+
+# Create series
+for i in range(1, n):
+ series[i] = logistic_map(series[i-1], 3.6+0.13*dforce[i])
+# end for
+
+# MDP flow
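+# (time-embed over 10 frames, expand to degree-3 polynomials, extract the
+# slowest feature with SFA2 (SFA on a quadratic expansion); the two
+# EtaComputerNodes measure the slowness of the raw series and of the output)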
+flow = (mdp.nodes.EtaComputerNode() +
+ mdp.nodes.TimeFramesNode(10) +
+ mdp.nodes.PolynomialExpansionNode(3) +
+ mdp.nodes.SFA2Node(output_dim=1) +
+ mdp.nodes.EtaComputerNode())
+
+# Train
+flow.train(series)
+
+# Slow
+slow = flow(series)
+
+resc_dforce = (dforce - np.mean(dforce, 0)) / np.std(dforce, 0)
+
+print(u"Covariance (driving force vs. slow feature) : {}".format(mdp.utils.cov2(resc_dforce[:-9], slow)))
+print(u"Eta value (time series) : {}".format(flow[0].get_eta(t=10000)))
+print(u"Eta value (slow feature) : {}".format(flow[-1].get_eta(t=9996)))
diff --git a/ESN/EchoTorch-master/examples/validation/validation_10cv.py b/ESN/EchoTorch-master/examples/validation/validation_10cv.py
new file mode 100644
index 0000000..0082a5e
--- /dev/null
+++ b/ESN/EchoTorch-master/examples/validation/validation_10cv.py
@@ -0,0 +1,55 @@
+# -*- coding: utf-8 -*-
+#
+# File : examples/validation/validation_10cv
+# Description : 10-fold cross-validation on the Reuters C50 dataset.
+# Date : 26th of January, 2018
+#
+# This file is part of EchoTorch. EchoTorch is free software: you can
+# redistribute it and/or modify it under the terms of the GNU General Public
+# License as published by the Free Software Foundation, version 2.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License along with
+# this program; if not, write to the Free Software Foundation, Inc., 51
+# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Copyright Nils Schaetti
+
+
+# Imports
+import torch.utils.data
+from echotorch import datasets
+from echotorch.transforms import text
+
+
+# Reuters C50 dataset
+reutersloader = torch.utils.data.DataLoader(
+ datasets.ReutersC50Dataset(root="../../data/reutersc50/", download=True, n_authors=2,
+ transform=text.Token(), dataset_size=2, dataset_start=20),
+ batch_size=1, shuffle=True)
+
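+# (the dataset object drives the 10-fold logic itself: set_fold(k) selects
+# fold k and set_train() switches between that fold's train and test split)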
+# For each fold
+for k in range(10):
+ # Set fold and training mode
+ reutersloader.dataset.set_fold(k)
+ reutersloader.dataset.set_train(True)
+
+ # Get training data for this fold
+ for i, data in enumerate(reutersloader):
+ # Inputs and labels
+ inputs, label, labels = data
+ # end for
+
+ # Set test mode
+ reutersloader.dataset.set_train(False)
+
+ # Get test data for this fold
+ for i, data in enumerate(reutersloader):
+ # Inputs and labels
+ inputs, label, labels = data
+ # end for
+# end for
diff --git a/ESN/EchoTorch-master/requirements.txt b/ESN/EchoTorch-master/requirements.txt
new file mode 100644
index 0000000..903b0de
--- /dev/null
+++ b/ESN/EchoTorch-master/requirements.txt
@@ -0,0 +1,6 @@
+# This is an implicit value, here for clarity
+--index-url https://pypi.python.org/simple/
+
+sphinx_bootstrap_theme
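+
+# Pinned PyTorch 0.1.11 wheel (CUDA 7.5, Python 2.7); on newer setups,
+# install torch from PyPI instead.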
+http://download.pytorch.org/whl/cu75/torch-0.1.11.post5-cp27-none-linux_x86_64.whl
+torchvision
\ No newline at end of file
diff --git a/ESN/EchoTorch-master/setup.py b/ESN/EchoTorch-master/setup.py
new file mode 100644
index 0000000..d6b2498
--- /dev/null
+++ b/ESN/EchoTorch-master/setup.py
@@ -0,0 +1,18 @@
+from setuptools import setup, find_packages
+
+setup(name='EchoTorch',
+ version='0.1.2',
+ description="A Python toolkit for Reservoir Computing.",
+ long_description="A Python toolkit for Reservoir Computing and Echo State Network experimentation based on PyTorch.",
+ author='Nils Schaetti',
+ author_email='nils.schaetti@unine.ch',
+ license='GPLv3',
+ packages=find_packages(),
+ install_requires=[
+ 'torch',
+ 'numpy',
+ 'torchvision'
+ ],
+ zip_safe=False
+ )
+
diff --git a/ESN/test b/ESN/test
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/ESN/test
@@ -0,0 +1 @@
+