Compare commits


188 Commits

Author SHA1 Message Date
jrandom
77310e17d1 * 2007-02-15 0.6.1.27 released
2007-02-15  jrandom
    * Limit the whispering floodfill sends to at most 3 randomly
      chosen from the known floodfill peers
2007-02-15 23:25:04 +00:00
jrandom
e54b964929 2007-02-14 jrandom
* Don't filter out KICK and H(ide oper status) IRC messages
      (thanks Takk and postman!)
2007-02-14 21:35:43 +00:00
jrandom
809f3e847b 2007-02-13 jrandom
* Tell our peers about who we know in the floodfill netDb every
      6 hours or so, mitigating the situation where peers lose track
      of floodfill routers.
    * Disable the Syndie updater (people should use the new Syndie,
      not this one)
    * Disable the eepsite tunnel by default
2007-02-14 04:33:36 +00:00
zzz
f4beebe60d (zzz) 02-13 2007-02-14 03:04:11 +00:00
jrandom
827e427f0b added trac.i2p 2007-02-12 10:26:21 +00:00
zzz
c02125511d (zzz) 02-06 2007-02-08 18:51:58 +00:00
zzz
1aa1069b6f (zzz) 01-30 2007-02-02 02:50:43 +00:00
zzz
91d281077d 2007-01-30 zzz
* i2psnark: Don't hold _snarks lock while checking a snark,
      so web page is responsive at startup
2007-01-30 08:58:19 +00:00
zzz
f339dec024 2007-01-29 zzz
* i2psnark: Add NickyB tracker
2007-01-30 04:05:21 +00:00
zzz
2aeef44f8d 2007-01-28 zzz
* i2psnark: Don't hold sendQueue lock while flushing output,
      to make everything run smoother
2007-01-29 04:03:36 +00:00
zzz
0fd41a9490 2007-01-27 zzz
* i2psnark: Fix orphaned Snark reader tasks leading to OOMs
2007-01-28 02:30:05 +00:00
complication
58f10d14b2 2007-01-20 Complication
* Drop overlooked comment
2007-01-21 03:49:41 +00:00
complication
46ca42ddf8 2007-01-20 Complication
* Modify ReseedHandler to query the "i2p.reseedURL" property from I2PAppContext
      instead of System, so setting a reseed URL in advanced configuration has effect.
    * Clean out obsolete reseed code from ConfigNetHandler.
2007-01-21 03:35:49 +00:00
zzz
e6e6d6f4ee 2007-01-20 zzz
* Improve performance by not reading in the whole
      piece from disk for each request. A huge memory savings
      on 1MB torrents with many peers.
2007-01-21 01:43:31 +00:00
zzz
8a87df605b 2007-01-20 zzz
* i2psnark: More choking rotation tweaks
    * Improve performance by not reading in the whole
      piece from disk for each request. A huge memory savings
      on 1MB torrents with many peers.
2007-01-21 00:35:09 +00:00
zzz
8ca085bceb (zzz) 1/16 2007-01-19 03:04:49 +00:00
zzz
df47587db0 * Add new HTTP Proxy error message for non-http protocols 2007-01-18 01:42:13 +00:00
zzz
d705e0ad04 2007-01-17 zzz
* Add note on Syndie index.html steering people to new Syndie
2007-01-17 05:13:27 +00:00
zzz
40d209dd7c 2007-01-17 zzz
* i2psnark: Fix crash when autostart off and
      torrent started manually
2007-01-16 06:20:23 +00:00
zzz
7f2a0457bf 2007-01-16 zzz
* i2psnark: Fix bug caused by last i2psnark checkin
      (ConnectionAcceptor not started)
    * Don't start PeerCoordinator, ConnectionAcceptor,
      and TrackerClient unless starting torrent
2007-01-16 04:47:58 +00:00
jrandom
f4749f2483 2007-01-15 jrandom
* small guard against unnecessary streaming lib reset packets
      (thanks Complication!)
2007-01-15 06:35:59 +00:00
zzz
9c42830076 2007-01-15 zzz
* i2psnark: Add 'Stop All' link on web page
    * Add some links to trackers and forum on web page
    * Don't start tunnel if 'Autostart' unchecked
    * Fix torrent restart bug by reopening file descriptors
2007-01-15 04:36:03 +00:00
zzz
53ba6c2a64 2007-01-14 zzz
* i2psnark: Improvements for torrents with > 4 leechers:
      choke based on upload rate when seeding, and
      be smarter and fairer about rotating choked peers.
    * Handle two common i2psnark OOM situations rather
      than shutting down the whole thing.
    * Fix reporting to tracker of remaining bytes for
      torrents > 4GB (but ByteMonsoon still has a bug)
2007-01-14 19:49:33 +00:00
zzz
61b3f21f69 (zzz) 01-09 2007-01-13 01:15:04 +00:00
zzz
506fd5f889 (zzz) Jan. 2 2007-01-03 03:19:06 +00:00
zzz
d538f888b4 (zzz) 12-19,12-26 2006-12-28 09:11:30 +00:00
jrandom
976c5fdd47 new syndie httpserv 2006-12-16 22:31:07 +00:00
zzz
b63f3437f2 (zzz) 12-12 mtg 2006-12-13 06:40:15 +00:00
zzz
e760f2e538 (zzz) 12/5 mtg 2006-12-06 02:52:13 +00:00
zzz
17c8fca779 (zzz) 11-28 mtg 2006-11-29 01:37:18 +00:00
zzz
87fda382c3 (zzz) 11/14 and 11/21 mtgs 2006-11-22 02:31:23 +00:00
zzz
1e404cd7ac 2006-10-29 zzz
* i2psnark: Fix and enable generation of multifile torrents,
      print error if no tracker selected at create-torrent,
      fix stopping a torrent that hasn't started successfully,
      add eBook and GayTorrents trackers to form,
      web page formatting tweaks
2006-11-10 01:44:35 +00:00
zzz
098f99d806 (zzz) 11/7 mtg 2006-11-10 01:34:54 +00:00
zzz
da93f96035 (zzz) 10/31 mtg 2006-11-02 02:39:07 +00:00
complication
ead39cc87e 2006-10-29 Complication
* Ensure we get NTP samples from more diverse sources
      (0.pool.ntp.org, 1.pool.ntp.org, etc)
    * Discard median-based peer skew calculator as framed average works,
      and adjusting its percentage can make it behave median-like
    * Require more data points (from at least 20 peers)
      before considering a peer skew measurement reliable
2006-10-29 19:29:50 +00:00
zzz
e4e3c44459 (zzz) 10/24 2006-10-25 08:22:09 +00:00
zzz
af151e32e5 (zzz) 10/17 mtg 2006-10-21 08:22:24 +00:00
jrandom
12819a2a17 added mtn.i2p 2006-10-15 02:10:36 +00:00
jrandom
87eedff254 0.6.1.26 2006-10-09 05:10:22 +00:00
jrandom
4c59cd7621 * 2006-10-10 0.6.1.26 released
2006-10-10  jrandom
    * Removed the status display from the console, as it's more confusing
      than informative (though the content is still displayed in the HTML)
2006-10-09 01:44:47 +00:00
complication
ef707e7956 2006-10-08 Complication
* Update comment to reflect current status
2006-10-08 23:10:58 +00:00
complication
73cf3fb299 2006-10-08 Complication
* Add a framed average peer clock skew calculator
    * Add config property "router.clockOffsetSanityCheck" to determine
      if NTP-suggested clock offsets get sanity checked (default "true")
    * Reject NTP-suggested clock offsets if they'd increase peer clock skew
      by more than 5 seconds, or make it more than 20 seconds total
    * Decrease log level in getMedianPeerClockSkew()
2006-10-08 22:52:59 +00:00
zzz
80b0c97d72 (zzz) 10/3 status notes 2006-10-04 19:41:09 +00:00
zzz
5cf85c1d7b (zzz)
* i2psnark: Second try at synchronization fix - synch addRequest()
      completely rather than just portions of it and requestNextPiece()
2006-09-29 23:54:17 +00:00
jrandom
c14e52ceb5 2006-09-27 jrandom
* added HMAC-SHA256
    * properly use CRLF with EepPost
    * suppress jbigi/jcpuid messages if jbigi.dontLog/jcpuid.dontLog is set
    * PBE session key generation (with 1000 rounds of SHA256)
    * misc SDK helper functions
2006-09-27 06:00:33 +00:00
complication
32a579e480 2006-09-26 Complication
* Take back another inadvertent logging change in NTCPConnection
2006-09-27 04:44:13 +00:00
complication
0a240a4436 2006-09-26 Complication
* Take back an accidental log level change
2006-09-27 04:31:34 +00:00
complication
9325b806e4 2006-09-26 Complication
* Subclass from Clock a RouterClock which can access router transports,
      with the goal of developing it to second-guess NTP results
    * Make transports report clock skew in seconds
    * Adjust renderStatusHTML() methods accordingly
    * Show average for NTCP clock skews too
    * Give transports a getClockSkews() method to report clock skews
    * Give transport manager a getClockSkews() method to aggregate results
    * Give comm system facade a getMedianPeerClockSkew() method which RouterClock calls
      (to observe results, add "net.i2p.router.transport.CommSystemFacadeImpl=WARN" to
logging)
    * Extra explicitness in NTCP classes to denote unit of time.
    * Fix some places in NTCPConnection where milliseconds and seconds were confused
2006-09-27 04:02:13 +00:00
zzz
ef2e24ea11 (zzz)
* i2psnark: Paranoid copy before writing pieces,
      recheck files on completion, redownload bad pieces
    * i2psnark: Don't contact tracker as often when seeding
2006-09-26 03:11:39 +00:00
zzz
373934c6e0 (zzz)
* i2psnark: Add some synchronization to prevent rare problem
      after restoring orphan piece
2006-09-24 18:30:22 +00:00
zzz
e8e8bac694 (zzz)
* i2psnark: Eliminate duplicate requests caused by i2p-bt's
      rapid choke/unchokes
    * i2psnark: Truncate long TrackerErr messages on web page
2006-09-20 22:39:24 +00:00
zzz
23e8a558c2 (zzz)
* i2psnark: Implement retransmission of requests. This
      eliminates one cause of complete stalls with a peer.
      This problem is common on torrents with a small number of
      active peers where there are no choke/unchokes to kickstart things.
2006-09-16 21:07:28 +00:00
complication
46f2645834 2006-09-14 Complication
* news.xml update
2006-09-14 03:16:53 +00:00
zzz
2329439034 (zzz)
* i2psnark: Fix restoral of partial pieces broken by last patch
2006-09-14 02:37:32 +00:00
zzz
6d400368b9 (zzz) changelog date fix 2006-09-13 23:24:14 +00:00
zzz
26c13b40fe (zzz)
* i2psnark: Mark a peer's requests as unrequested on disconnect,
      preventing premature end game
    * i2psnark: Randomize selection of next piece during end game
    * i2psnark: Don't restore a partial piece to a peer that is already working on it
    * i2psnark: strip ".torrent" on web page
    * i2psnark: Limit piece size in generated torrent to 1MB max
2006-09-13 23:02:07 +00:00
zzz
9fd0e95fe8 (zzz) 9/12 status 2006-09-13 17:34:58 +00:00
zzz
7e21f2c92b (zzz)
* i2psnark: Add "Stalled" indication and stat totals on web page
2006-09-10 01:55:37 +00:00
zzz
c9d8e796c6 (zzz)
* i2psnark: Fix bug where new peers would always be set to "interested"
      regardless of actual interest
    * i2psnark: Reduce max piece size from 10MB to 1MB; larger may have severe
      memory and efficiency problems
2006-09-09 22:15:05 +00:00
zzz
e7203f5d46 (zzz) 0.6.1.25 2006-09-09 21:19:49 +00:00
jrandom
22d76a1b64 * 2006-09-09 0.6.1.25 released 2006-09-09 17:46:21 +00:00
jrandom
0903dc46c6 2006-09-08 jrandom
* Tweak the PRNG logging so it only displays error messages if there are
      problems
    * Disable dynamic router keys for the time being, as they don't offer
      meaningful security, may hurt the router, and makes it harder to
      determine the network health.  The code to restart on SSU IP change is
      still enabled however.
    * Disable tunnel load testing, leaning back on the tiered selection for
      the time being.
    * Spattering of bugfixes
2006-09-09 01:41:57 +00:00
zzz
0f56ec8078 (zzz) oops remove duplicate 2006-09-07 23:26:53 +00:00
zzz
70ee1df2bf (zzz)
* i2psnark: Increase output timeout from 2 min to 4 min
    * i2psnark: Orphan debug msg cleanup
    * i2psnark: More web rate report cleanup
2006-09-07 23:22:12 +00:00
zzz
61a6a29bec cCVS: ---------------------------------------------------------------------- 2006-09-07 23:03:18 +00:00
zzz
678f7d8f72 (zzz)
* i2psnark: Implement basic partial-piece saves across connections
    * i2psnark: Implement keep-alive sending. This will keep non-i2psnark clients
      from dropping us for inactivity but also renders the 2-minute transmit-inactivity
      code in i2psnark ineffective. Will have to research why there is transmit but
      not receive inactivity code. With the current connection limit of 24 peers
      we aren't in any danger of keeping out new peers by keeping inactive ones.
    * i2psnark: Increase CHECK_PERIOD from 20 to 40 since nothing happens in 20 seconds
    * i2psnark: Fix dropped chunk handling
    * i2psnark: Web rate report cleanup
2006-09-06 06:32:53 +00:00
zzz
b92ee364bc (zzz)
* i2psnark: Report cleared trackerErr immediately
    * i2psnark: Add trackerErr reporting after previous success; retry more quickly
    * i2psnark: Set up new connections more quickly
    * i2psnark: Don't delay tracker fetch when setting up lots of connections
    * i2psnark: Reduce MAX_UPLOADERS from 12 to 4
2006-09-04 08:26:21 +00:00
zzz
aef19fcd38 (zzz) i2psnark: enable pipelining, set tunnel length default to 1 + 0-1 2006-09-04 06:01:53 +00:00
zzz
3b01df1d2c (zzz) Add rate reporting to i2psnark 2006-09-03 09:12:22 +00:00
complication
4aed23b198 2006-09-03 Complication
* Limit form size in SusiDNS to avoid exceeding a POST size limit on postback
    * Print messages about addressbook size to give better overview
    * Enable delete function in published addressbook
2006-09-03 06:37:46 +00:00
complication
03e8875c27 2006-08-21 Complication
* Fix error reporting discrepancy (thanks for helping notice, yojoe!)
2006-08-21 05:55:33 +00:00
complication
48921a0875 2006-08-03 Complication
* news.xml update
2006-08-04 00:49:40 +00:00
jrandom
633fabb09e 2006-08-03 jrandom
* Decrease the recently modified tunnel building timeout, though keep
      the scaling on their processing
2006-08-03 22:34:24 +00:00
jrandom
bc42c26d94 2006-07-31 jrandom
* Increase the tunnel building timeout
    * Avoid a rare race (thanks bar!)
    * Fix the bandwidth capacity publishing code to factor in share percentage
      and outbound throttling (oops)
2006-08-01 02:26:52 +00:00
complication
3c09ca3359 2006-07-29 Complication
* Treat NTP responses from unexpected stratums like failures
2006-07-30 05:08:20 +00:00
zzz
1e9e7dd345 (zzz) 0.6.1.24 2006-07-29 23:02:57 +00:00
jrandom
034803add7 * 2006-07-28 0.6.1.24 released 2006-07-29 18:03:14 +00:00
jrandom
b25bb053bb 2006-07-28 jrandom
* Don't try to reverify too many netDb entries at once (thanks
      cervantes and Complication!)
2006-07-29 04:41:15 +00:00
jrandom
9bd0c79441 2006-07-28 jrandom
* Actually fix the threading deadlock issue in the netDb (removing
      the synchronized access to individual kbuckets while validating
      individual entries) (thanks cervantes, postman, frosk, et al!)
2006-07-29 01:11:50 +00:00
jrandom
06b8670410 * 2006-07-27 0.6.1.23 released 2006-07-28 03:34:59 +00:00
jrandom
6577ae499f 2006-07-27 jrandom
* Cut down NTCP connection establishments once we know the peer is skewed
      (rather than wait for full establishment before verifying)
    * Removed a lock on the stats framework when accessing rates, which
      shouldn't be a problem, assuming rates are created (pretty much) all at
      once and merely updated during the lifetime of the jvm.
2006-07-27 23:40:00 +00:00
jrandom
54bc5485ec oops, thanks bar! 2006-07-27 07:06:55 +00:00
jrandom
84b741ac98 2006-07-27 jrandom
* Further NTCP write status cleanup
    * Handle more oddly-timed NTCP disconnections (thanks bar!)
2006-07-27 06:20:25 +00:00
jrandom
c48c419d74 quick prng workaround 2006-07-27 01:34:31 +00:00
jrandom
fb2e795add 2006-07-26 jrandom
* When dropping a netDb router reference, only accept newer
      references as part of the update check
    * If we have been up for a while, don't accept really old
      router references (published 2 or more days ago)
    * Drop router references once they are no longer valid, even if
      they were allowed in due to the lax restrictions on startup
2006-07-27 01:04:59 +00:00
jrandom
ec215777ec 2006-07-26 jrandom
* When dropping a netDb router reference, only accept newer
      references as part of the update check
    * If we have been up for a while, don't accept really old
      router references (published 2 or more days ago)
    * Drop router references once they are no longer valid, even if
      they were allowed in due to the lax restrictions on startup
2006-07-27 00:56:49 +00:00
jrandom
d4e0f27c56 2006-07-26 jrandom
* Every time we create a new router identity, add an entry to the
      new "identlog.txt" text file in the I2P install directory.  For
      debugging purposes, publish the count of how many identities the
      router has cycled through, though not the identities themselves.
    * Cleaned up the way the multitransport shitlisting worked, and
      added per-transport shitlists
    * When dropping a router reference locally, first fire a netDb
      lookup for the entry
    * Take the peer selection filters into account when organizing the
      profiles (thanks Complication!)
    * Avoid some obvious configuration errors for the NTCP transport
      (invalid ports, "null" ip, etc)
    * Deal with some small NTCP bugs found in the wild (unresolvable
      hosts, strange network disconnects, etc)
    * Send our netDb info to peers we have direct NTCP connections to
      after each 6-12 hours of connection uptime
    * Clean up the NTCP reading and writing queue logic to avoid some
      potential delays
    * Allow people to specify the IP that the SSU transport binds on
      locally, via the advanced config "i2np.udp.bindInterface=1.2.3.4"
2006-07-26 06:36:18 +00:00
complication
e1c686baa6 2006-07-18 Complication
* URL and date fix in news.xml
2006-07-18 23:35:54 +00:00
jrandom
d57af1aef4 remove 1.5ism 2006-07-18 20:20:08 +00:00
jrandom
a52dd57215 * 2006-07-18 0.6.1.22 released
2006-07-18  jrandom
    * Add a failsafe to the NTCP transport to make sure we keep
      pumping writes when we should.
    * Properly reallow 16-32KBps routers in the default config
      (thanks Complication!)
2006-07-18 20:08:00 +00:00
complication
65138357d3 2006-07-16 Complication
* Collect tunnel build agree/reject/expire statistics
      for each bandwidth tier of peers (and peers of unknown tiers,
      even if those shouldn't exist)
2006-07-16 17:20:46 +00:00
jrandom
f6320696dd 2006-07-14 jrandom
* Improve the multitransport shitlisting (thanks Complication!)
    * Allow routers with a capacity of 16-32KBps to be used in tunnels under
      the default configuration (thanks for the stats Complication!)
    * Properly allow older router references to load on startup
      (thanks bar, Complication, et al!)
    * Add a new "i2p.alwaysAllowReseed" advanced config property, though
      hopefully today's changes should make this unnecessary (thanks void!)
    * Improved NTCP buffering
    * Close NTCP connections if we are too backlogged when writing to them
2006-07-14 18:08:44 +00:00
jrandom
900d8a2026 oops, test method. thanks cervantes 2006-07-07 19:04:24 +00:00
jrandom
ccc9a87e8c unnecessary 2006-07-07 18:59:16 +00:00
jrandom
208634e5de 2006-07-04 jrandom
* New NIO-based tcp transport (NTCP), enabled by default for outbound
      connections only.  Those who configure their NAT/firewall to allow
      inbound connections and specify the external host and port
      (dyndns/etc is ok) on /config.jsp can receive inbound connections.
      SSU is still enabled for use by default for all users as a fallback.
    * Substantial bugfix to the tunnel gateway processing to transfer
      messages sequentially instead of interleaved
    * Renamed GNU/crypto classes to avoid name clashes with kaffe and other
      GNU/Classpath based JVMs
    * Adjust the Fortuna PRNG's pooling system to reduce contention on
      refill with a background thread to refill the output buffer
    * Add per-transport support for the shitlist
    * Add a new async pumped tunnel gateway to reduce tunnel dispatcher
      contention
2006-07-04 21:17:44 +00:00
complication
3d07205c9d 2006-07-01 Complication
* Ensure that the I2PTunnel web interface won't update tunnel settings
      for shared clients when a non-shared client is modified
      (thanks for spotting, BarkerJr!)
2006-07-01 22:44:34 +00:00
zzz
f0a424a93f (zzz) .21, 6-13 mtg 2006-06-15 22:15:09 +00:00
cervantes
f9b59ee07d 2006-06-14 cervantes
* Small tweak to I2PTunnel CSS, so it looks better with desktops
      that use Bitstream Vera fonts @ 96 dpi
2006-06-14 05:24:34 +00:00
jrandom
b92b9d2618 * 2006-06-14 0.6.1.21 released 2006-06-14 02:17:40 +00:00
jrandom
a3db9429a7 2006-06-13 jrandom
* Use a minimum uptime of 2 hours, not 4 (oops)
2006-06-13 23:29:51 +00:00
jrandom
291a5c9578 2006-06-13 jrandom
* Cut down the proactive rejections due to queue size - if we are
      at the point of having decrypted the request off the queue, might
      as well let it through, rather than waste that decryption
2006-06-13 09:38:48 +00:00
jrandom
0a3281c279 2006-06-11 Kloug
* Bugfix to the I2PTunnel IRC filter to support multiple concurrent
      outstanding pings/pongs
2006-06-11 19:48:37 +00:00
jrandom
23f30ba576 2006-06-10 jrandom
* Further reduction in proactive rejections
2006-06-10 20:14:57 +00:00
jrandom
f3de85c4de 2006-06-09 jrandom
* Don't let the pending tunnel request queue grow beyond reason
      (letting things sit for up to 30s when they fail after 10s
      seems a bit... off)
2006-06-10 00:34:44 +00:00
jrandom
a3a4888e0b 2006-06-08 jrandom
* Be more conservative in the proactive rejections
2006-06-09 01:02:40 +00:00
jrandom
6fd7881f8e thanks bar 2006-06-05 06:41:11 +00:00
complication
381f716769 2006-06-04 Complication
* Stop sending a blank line before USER in susimail.
      Seemed to break in rare cases, thanks for reporting, Brachtus!
2006-06-05 01:33:03 +00:00
jrandom
f2078e1523 * 2006-06-04 0.6.1.20 released
2006-06-04  jrandom
    * Reduce the SSU ack frequency
    * Tweaked the tunnel rejection settings to reject less aggressively
2006-06-04 22:25:08 +00:00
jrandom
f2fb87c88b 2006-05-31 jrandom
* Only send netDb searches to the floodfill peers for the time being
    * Add some proof of concept filters for tunnel participation.  By default,
      it will skip peers with an advertised bandwidth of less than 32KBps or
      an advertised uptime of less than 2 hours.  If this is sufficient, a
      safer implementation of these filters will be implemented.
2006-05-31 23:23:37 +00:00
complication
fcbea19478 2006-05-30 Complication
* weekly news.xml update
2006-05-31 03:19:24 +00:00
complication
92f25bd4fa 2006-05-18 Complication
* news.xml update
2006-05-19 00:17:10 +00:00
jrandom
85c2c11217 * 2006-05-18 0.6.1.19 released
2006-05-18  jrandom
    * Made the SSU ACKs less frequent when possible
2006-05-18 22:31:06 +00:00
complication
de1ca4aea4 2006-05-17 Complication
* Fix some oversights in my previous changes:
      adjust some loglevels, make a few statements less wasteful,
      make one comparison less confusing and more likely to log unexpected values
2006-05-18 03:42:55 +00:00
jrandom
a0f865fb99 2006-05-17 jrandom
* Make the peer page sortable
    * SSU modifications to cut down on unnecessary connection failures
2006-05-18 03:00:48 +00:00
jrandom
2c3fea5605 2006-05-16 jrandom
* Further shitlist randomizations
    * Adjust the stats monitored for detecting cpu overload when dropping new
      tunnel requests
2006-05-16 18:34:08 +00:00
jrandom
ba1d88b5c9 2006-05-15 jrandom
* Add a load dependent throttle on the pending inbound tunnel request
      backlog
    * Increased the tunnel test failure slack before killing a tunnel
2006-05-15 17:33:02 +00:00
complication
2ad715c668 2006-05-13 Complication
* Update the build number too
2006-05-14 05:11:36 +00:00
complication
5f17557e54 2006-05-13 Complication
* Separate growth factors for tunnel count and tunnel test time
    * Reduce growth factors, so probabilistic throttle would activate
    * Square probAccept values to decelerate more strongly when far from average
    * Create a bandwidth stat with approximately 15-second half life
    * Make allowTunnel() check the 1-second bandwidth for overload
      before doing allowance calculations using 15-second bandwidth
    * Tweak the overload detector in BuildExecutor to be more sensitive
      for rising edges, add ability to initiate tunnel drops
    * Add a function to seek and drop the highest-rate participating tunnel,
      keeping a fixed+random grace period between such drops.
      It doesn't seem very effective, so disabled by default
      ("router.dropTunnelsOnOverload=true" to enable)
2006-05-14 04:52:44 +00:00
jrandom
2ad5a6f907 2006-05-11 jrandom
* PRNG bugfix (thanks cervantes and Complication!)
2006-05-12 03:31:44 +00:00
complication
0920462060 2006-05-09 Complication
* weekly news.xml update
2006-05-10 02:38:42 +00:00
jrandom
870e94e184 * 2006-05-09 0.6.1.18 released
2006-05-09  jrandom
    * Further tunnel creation timeout revamp
2006-05-09 21:17:17 +00:00
complication
6b0d507644 2006-05-07 Complication
* Fix problem whereby repeated calls to allowed() would make
      the 1-tunnel exception permit more than one concurrent build
2006-05-08 03:19:46 +00:00
jrandom
70cf9e4ca7 2006-05-06 jrandom
* Readjust the tunnel creation timeouts to reject less but fail earlier,
      while tracking the extended timeout events.
2006-05-06 20:27:34 +00:00
jrandom
2a3974c71d 2006-05-04 jrandom
* Short circuit a highly congested part of the stat logging unless it's
      required (may or may not help with a synchronization issue reported by
      andreas)
2006-05-04 23:08:48 +00:00
complication
46ac9292e8 2006-05-03 Complication
* Allow a single build attempt to proceed despite 1-minute overload
      only if the 1-second rate shows enough spare bandwidth
      (e.g. overload has already eased)
2006-05-03 11:13:26 +00:00
complication
4307097472 2006-05-02 Complication
* Correct a misnamed property in SummaryHelper.java
      to avoid confusion
    * Make the maximum allowance of our own concurrent
      tunnel builds slightly adaptive: one concurrent build per 6 KB/s
      within the fixed range 2..10
    * While overloaded, try to avoid completely choking our own build attempts,
      instead prefer limiting them to 1
2006-05-03 04:30:26 +00:00
complication
ed3fdaf4f1 2006-05-02 Complication
* Fixed URL in previous update, sorry
2006-05-03 02:11:06 +00:00
complication
378a9a8f5c 2006-05-02 Complication
* Weekly news.xml update
2006-05-03 02:03:01 +00:00
jrandom
4ef6180455 2006-05-01 jrandom
* Adjust the tunnel build timeouts to cut down on expirations, and
      increased the SSU connection establishment retransmission rate to
      something less glacial.
    * For the first 5 minutes of uptime, be less aggressive with tunnel
      exploration, opting for more reliable peers to start with.
2006-05-01 22:40:21 +00:00
jrandom
d4970e23c0 2006-05-01 jrandom
* Fix for a netDb lookup race (thanks cervantes!)
2006-05-01 19:09:02 +00:00
duck
0c9f165016 fix typos 2006-05-01 15:39:37 +00:00
jrandom
be3a899ecb 2006-04-27 jrandom
* Avoid a race in the message reply registry (thanks cervantes!)
2006-04-28 00:31:20 +00:00
jrandom
7a6a749004 2006-04-27 jrandom
* Fixed the tunnel expiration desync code (thanks Complication!)
2006-04-28 00:08:40 +00:00
complication
17271ee3f0 2006-04-25 Complication
* weekly news.xml update
2006-04-26 02:30:05 +00:00
complication
99bcfa90df 2006-04-24 Complication
* Update news.xml to reflect 0.6.1.17
2006-04-24 12:43:25 +00:00
jrandom
eb36e993c1 * 2006-04-23 0.6.1.17 released 2006-04-23 21:06:12 +00:00
zzz
e5eca5fa45 zzz update 2006-04-22 20:37:21 +00:00
jrandom
8cba2f4236 2006-04-19 jrandom
* Adjust how we pick high capacity peers to allow the inclusion of fast
      peers (the previous filter assumed an old usage pattern)
    * New set of stats to help track per-packet-type bandwidth usage better
    * Cut out the proactive tail drop from the SSU transport, for now
    * Reduce the frequency of tunnel build attempts while we're saturated
    * Don't drop tunnel requests as easily - prefer to explicitly reject them
2006-04-19 17:46:51 +00:00
complication
40d5ed31ac 2006-04-15 Complication
* Update news.xml to reflect 0.6.1.16
2006-04-15 17:25:50 +00:00
jrandom
181275fe35 * 2006-04-15 0.6.1.16 released 2006-04-15 07:58:12 +00:00
jrandom
23d8c01ce7 2006-04-15 jrandom
* Adjust the proactive tunnel request dropping so we will reject what we
      can instead of dropping so much (but still dropping if we get too far
      overloaded)
2006-04-15 07:15:19 +00:00
jrandom
de83944486 2006-04-14 jrandom
* 0 isn't very random
    * Adjust the tunnel drop to be more reasonable
2006-04-14 20:24:07 +00:00
jrandom
90cd7ff23a 2006-04-14 jrandom
* -28.00230115311259 is not between 0 and 1 in any universe I know.
    * Made the bw-related tunnel join throttle much simpler
2006-04-14 18:04:11 +00:00
jrandom
8d0a9b4ccd 2006-04-14 jrandom
* Make some more stats graphable, and allow some internal tweaking on the
      tunnel pairing for creation and testing.
2006-04-14 11:40:35 +00:00
jrandom
230d4cd23f * 2006-04-13 0.6.1.15 released 2006-04-13 12:40:21 +00:00
jrandom
e9b6fcc0a4 2006-04-12 jrandom
* Added a further failsafe against trying to queue up too many messages to
      a peer.
2006-04-13 04:22:06 +00:00
jrandom
8fcb871409 2006-04-12 jrandom
* Watch out for failed syndie index fetches (thanks bar!)
2006-04-12 06:49:01 +00:00
jrandom
83bef43fd5 2006-04-11 jrandom
* Throttling improvements on SSU - throttle all transmissions to a peer
      when we are retransmitting, not just retransmissions.  Also, if
      we're already retransmitting to a peer, probabilistically tail drop new
      messages targeting that peer, based on the estimated wait time before
      transmission.
    * Fixed the rounding error in the inbound tunnel drop probability.
2006-04-11 13:39:06 +00:00
jrandom
b4fc6ca31b 2006-04-10 jrandom
* Include a combined send/receive graph (good idea cervantes!)
    * Proactively drop inbound tunnel requests probabilistically as the
      estimated queue time approaches our limit, rather than letting them all
      through up to that limit.
2006-04-10 05:37:28 +00:00
jrandom
ab3f1b708d 2006-04-08 jrandom
    * Stat summarization fix (removing the occasional holes in the jrobin
      graphs)
2006-04-09 01:14:08 +00:00
jrandom
c76402a160 2006-04-08 jrandom
* Process inbound tunnel requests more efficiently
    * Proactively drop inbound tunnel requests if the queue before we'd
      process it in is too long (dynamically adjusted by cpu load)
    * Adjust the tunnel rejection throttle to reject requests when we have to
      proactively drop too many requests.
    * Display the number of pending inbound tunnel join requests on the router
      console (as the "handle backlog")
    * Include a few more stats in the default set of graphs
2006-04-08 06:15:43 +00:00
jrandom
a50c73aa5e 2006-04-06 jrandom
* Fix for a bug in the new irc ping/pong filter (thanks Complication!)
2006-04-07 01:26:32 +00:00
jrandom
5aa66795d2 2006-04-06 jrandom
* Fixed a typo in the reply cleanup code
2006-04-06 10:33:44 +00:00
jrandom
ac3c2d2b15 * 2006-04-05 0.6.1.14 released 2006-04-05 17:08:04 +00:00
jrandom
072a45e5ce 2006-04-05 jrandom
* Cut down on the time that we allow a tunnel creation request to sit by
      without response, and reject tunnel creation requests that are lagged
      locally.  Also switch to a bounded FIFO instead of a LIFO
    * Threading tweaks for the message handling (thanks bar!)
    * Don't add addresses to syndie with blank names (thanks Complication!)
    * Further ban clearance
2006-04-05 04:40:00 +00:00
complication
1ab14e52d2 2006-04-04 Complication
* weekly news.xml update
2006-04-05 03:06:00 +00:00
jrandom
9a820961a2 2006-04-05 jrandom
* Fix during the ssu handshake to avoid an unnecessary failure on
      packet retransmission (thanks ripple!)
    * Fix during the SSU handshake to use the negotiated session key asap,
      rather than using the intro key for more than we should (thanks ripple!)
    * Fixes to the message reply registry (thanks Complication!)
    * More comprehensive syndie banning (for repeated pushes)
    * Publish the router's ballpark bandwidth limit (w/in a power of 2), for
      testing purposes
    * Put a floor back on the capacity threshold, so too many failing peers
      won't cause us to pick very bad peers (unless we have very few good
      ones)
    * Bugfix to cut down on peers using introducers unnecessarily (thanks
      Complication!)
    * Reduced the default streaming lib message size to fit into a single
      tunnel message, rather than require 5 tunnel messages to be transferred
      without loss before recomposition.  This reduces throughput, but should
      increase reliability, at least for the time being.
    * Misc small bugfixes in the router (thanks all!)
    * More tweaking for Syndie's CSS (thanks Doubtful Salmon!)
2006-04-04 12:20:32 +00:00
jrandom
764149aef3 2006-04-01 jrandom
* Take out the router watchdog's teeth (don't restart on leaseset failure)
    * Filter the IRC ping/pong messages, as some clients send unsafe
      information in them (thanks aardvax and dust!)
2006-04-03 10:07:22 +00:00
jrandom
1b3ad31bff 2006-04-01 jrandom
* Take out the router watchdog's teeth (don't restart on leaseset failure)
2006-04-01 19:05:35 +00:00
jrandom
15e6c27c04 2006-03-30 jrandom
* Substantially reduced the lock contention in the message registry (a
      major hotspot that can choke most threads).  Also reworked the locking
      so we don't need per-message timer events
    * No need to have additional per-peer message clearing, as they are
      either unregistered individually or expired.
    * Include some of the more transient tunnel throttling
2006-03-30 07:26:43 +00:00
complication
8b707e569f 2006-03-28 Complication
* weekly news.xml update
2006-03-29 02:09:23 +00:00
complication
e4c4b24c61 2006-03-26 Complication
* announce 0.6.1.3
2006-03-27 03:24:38 +00:00
jrandom
031636e607 * 2006-03-26 0.6.1.13 released 2006-03-26 23:23:49 +00:00
jrandom
b5c0d77c69 2006-03-25 jrandom
* Added a simple purge and ban of syndie authors, shown as the
      "Purge and ban" button on the addressbook for authors that are already
      on the ignore list.  All of their entries and metadata are deleted from
      the archive, and they are transparently filtered from any remote
      syndication (so no user on the syndie instance will pull any new posts
      from them)
    * More strict tunnel join throttling when congested
2006-03-25 23:50:48 +00:00
jrandom
d489caa88c 2006-03-24 jrandom
* Try to desync tunnel building near startup (thanks Complication!)
    * If we are highly congested, fall back on only querying the floodfill
      netDb peers, and only storing to those peers too
    * Cleaned up the floodfill-only queries
2006-03-24 20:53:28 +00:00
complication
2a24029acf 2006-03-21 Complication
* Weekly news.xml update
2006-03-22 02:15:13 +00:00
jrandom
c5aab8c750 2006-03-21 jrandom
* Avoid a very strange (unconfirmed) bug that people using the systray's
      browser picker dialog could trigger, by disabling the GUI-based browser
      picker.
    * Cut down on subsequent streaming lib reset packets transmitted
    * Use a larger MTU more often
    * Allow netDb searches to query shitlisted peers, as the queries are
      indirect.
    * Add an option to disable non-floodfill netDb searches (non-floodfill
      searches are used by default, but can be disabled by adding
      netDb.floodfillOnly=true to the advanced config)
2006-03-21 23:11:32 +00:00
jrandom
343748111a 2006-03-20 jrandom
* Fix to allow for some slack when coalescing stats
    * Workaround some oddball errors
2006-03-20 05:39:54 +00:00
jrandom
c5ddfabfe9 2006-03-20 jrandom
* Fix to allow for some slack when coalescing stats
    * Workaround some oddball errors
2006-03-20 05:31:09 +00:00
jrandom
1ef33906ed 2006-03-18 jrandom
* Added a new graphs.jsp page to show all of the stats being harvested
2006-03-19 00:23:23 +00:00
jrandom
f3849a22ad 2006-03-18 jrandom
* Made the netDb search load limitations a little less stringent
    * Add support for specifying the number of periods to be plotted on the
      graphs - e.g. to plot only the last hour of a stat that is averaged at
      the 60 second period, add &periodCount=60
2006-03-18 23:09:35 +00:00
jrandom
b03ff21d3b 2006-03-17 jrandom
* Add support for graphing the event count as well as the average stat
      value (done by adding &showEvents=true to the URL).  Also supports
      hiding the legend (&hideLegend=true), the grid (&hideGrid=true), and
      the title (&hideTitle=true).
    * Removed an unnecessary arbitrary filter on the profile organizer so we
      can pick high capacity and fast peers more appropriately
2006-03-17 23:46:00 +00:00
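The graphing commits above all work through URL query parameters on the new graphs.jsp page. Combining them onto one request looks like this (the combination itself is illustrative; only the page and parameter names come from the entries above):

```
graphs.jsp?periodCount=60&showEvents=true&hideLegend=true&hideGrid=true&hideTitle=true
```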
jrandom
52094b10c9 aych tee emm ell smells 2006-03-16 22:37:57 +00:00
jrandom
fc927efaa3 2006-03-16 jrandom
* Integrate basic hooks for jrobin (http://jrobin.org) into the router
      console.  Selected stats can be harvested automatically and fed into
      in-memory RRD databases, and those databases can be served up either as
      PNG images or as RRDtool compatible XML dumps (see oldstats.jsp for
      details).  A base set of stats are harvested by default, but an
      alternate list can be specified by setting the 'stat.summaries' list on
      the advanced config.  For instance:
      stat.summaries=bw.recvRate.60000,bw.sendRate.60000
    * HTML tweaking for the general config page (thanks void!)
    * Odd NPE fix (thanks Complication!)
2006-03-16 21:52:09 +00:00
jrandom
65dc803fb7 2006-03-16 jrandom
* Integrate basic hooks for jrobin (http://jrobin.org) into the router
      console.  Selected stats can be harvested automatically and fed into
      in-memory RRD databases, and those databases can be served up either as
      PNG images or as RRDtool compatible XML dumps (see oldstats.jsp for
      details).  A base set of stats are harvested by default, but an
      alternate list can be specified by setting the 'stat.summaries' list on
      the advanced config.  For instance:
      stat.summaries=bw.recvRate.60000,bw.sendRate.60000
    * HTML tweaking for the general config page (thanks void!)
    * Odd NPE fix (thanks Complication!)
2006-03-16 21:45:17 +00:00
complication
349adf6690 2006-03-15 Complication
* Trim out an old, inactive IP second-guessing method
      (thanks for spotting, Anonymous!)
2006-03-16 00:49:22 +00:00
jrandom
2c843fd818 2006-03-15 jrandom
* Further stat cleanup
    * Keep track of how many peers we are actively trying to communicate with,
      beyond those who are just trying to communicate with us.
    * Further router tunnel participation throttle revisions to avoid spurious
      rejections
    * Rate stat display cleanup (thanks ripple!)
    * Don't even try to send messages that have been queued too long
2006-03-15 22:36:10 +00:00
jrandom
863b511cde 2006-03-15 jrandom
* Further stat cleanup
    * Keep track of how many peers we are actively trying to communicate with,
      beyond those who are just trying to communicate with us.
    * Further router tunnel participation throttle revisions to avoid spurious
      rejections
    * Rate stat display cleanup (thanks ripple!)
    * Don't even try to send messages that have been queued too long
2006-03-15 22:26:42 +00:00
zzz
c417e7c237 2006-03-14 zzz update 2006-03-15 06:02:07 +00:00
zzz
1822c0d7d8 2006-03-07 zzz update 2006-03-09 02:19:42 +00:00
zzz
94c1c32b51 2006-03-05 zzz
* Remove the +++--- from the logs on i2psnark startup
2006-03-06 01:57:47 +00:00
jrandom
deb35f4af4 2006-03-05 jrandom
* HTML fixes in Syndie to work better with opera (thanks shaklen!)
    * Give netDb lookups to floodfill peers more time, as they are much more
      likely to succeed (thereby cutting down on the unnecessary netDb
      searches outside the floodfill set)
    * Fix to the SSU IP detection code so we won't use introducers when we
      don't need them (thanks Complication!)
    * Add a brief shitlist to i2psnark so it doesn't keep on trying to reach
      peers given to it
    * Don't let netDb searches wander across too many peers
    * Don't use the 1s bandwidth usage in the tunnel participation throttle,
      as it's too volatile to have much meaning.
    * Don't bork if a Syndie post is missing an entry.sml
2006-03-05 17:07:07 +00:00
complication
883150f943 2006-03-05 Complication
* Reduce exposed statistical information,
      to make build and uptime tracking more expensive
2006-03-05 07:44:59 +00:00
complication
717d1b97b2 2006-03-04 Complication
* Fix the announce URL of orion's tracker in Snark sources
2006-03-04 23:50:01 +00:00
complication
e62135eacc 2006-03-03 Complication
* Explicit check for an index out of bounds exception while parsing
      an inbound IRC command (implicit check was there already)
2006-03-04 03:04:06 +00:00
jrandom
2c6d953359 2006-03-01 jrandom
* More aggressive tunnel throttling as we approach our bandwidth limit,
      and throttle based off periods wider than 1 second.
    * Included Doubtful Salmon's syndie stylings (thanks!)
2006-03-01 23:01:20 +00:00
zzz
2b79e2df3f 2006-02-28 zzz update 2006-03-01 04:11:16 +00:00
zzz
fab6e421b8 2006-02-27 zzz
* Update error page templates to add \r, Connection: close, and
      Proxy-connection: close.
2006-02-28 03:55:18 +00:00
190 changed files with 11970 additions and 1783 deletions


@@ -21,11 +21,12 @@ NATIVE_DIR=native
# router.jar: full I2P router
# jbigi.jar: collection of native optimized GMP routines for crypto
JAR_BASE=i2p.jar mstreaming.jar streaming.jar
JAR_CLIENTS=i2ptunnel.jar sam.jar i2psnark.jar
JAR_CLIENTS=i2ptunnel.jar sam.jar
JAR_ROUTER=router.jar
JAR_JBIGI=jbigi.jar
JAR_XML=xml-apis.jar resolver.jar xercesImpl.jar
JAR_CONSOLE=\
i2psnark.jar \
javax.servlet.jar \
commons-el.jar \
commons-logging.jar \
@@ -79,15 +80,15 @@ native_clean:
native_shared: libi2p.so
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2p_dsa --main=net.i2p.crypto.DSAEngine
@echo "* i2p_dsa is a simple test app with the DSA engine and Fortuna PRNG to make sure crypto is working"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/prng --main=gnu.crypto.prng.Fortuna
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/prng --main=gnu.crypto.prng.FortunaStandalone
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2ptunnel --main=net.i2p.i2ptunnel.I2PTunnel
@echo "* i2ptunnel is mihi's I2PTunnel CLI"
@echo " run it as ./i2ptunnel -cli to avoid awt complaints"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2ptunnelctl --main=net.i2p.i2ptunnel.TunnelControllerGroup
@echo "* i2ptunnelctl is a controller for I2PTunnel, reading i2ptunnel.config"
@echo " and launching the appropriate proxies"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2psnark --main=org.klomp.snark.Snark
@echo "* i2psnark is an anonymous bittorrent client"
#@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2psnark --main=org.klomp.snark.Snark
#@echo "* i2psnark is an anonymous bittorrent client"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2prouter --main=net.i2p.router.Router
@echo "* i2prouter is the main I2P router"
@echo " it can be used, and while the router console won't load,"
@@ -95,6 +96,6 @@ native_shared: libi2p.so
libi2p.so:
@echo "* Building libi2p.so"
@(cd build ; ${GCJ} ${OPTIMIZE} -fPIC -fjni -shared -o ../${NATIVE_DIR}/libi2p.so ${LIBI2P_JARS} ; cd .. )
@(cd build ; time ${GCJ} ${OPTIMIZE} -fPIC -fjni -shared -o ../${NATIVE_DIR}/libi2p.so ${LIBI2P_JARS} ; cd .. )
@ls -l ${NATIVE_DIR}/libi2p.so
@echo "* libi2p.so built"


@@ -10,6 +10,7 @@ import net.i2p.client.streaming.I2PSocket;
import net.i2p.client.streaming.I2PSocketManager;
import net.i2p.client.streaming.I2PSocketManagerFactory;
import net.i2p.util.Log;
import net.i2p.util.SimpleTimer;
import java.io.*;
import java.util.*;
@@ -31,6 +32,7 @@ public class I2PSnarkUtil {
private Map _opts;
private I2PSocketManager _manager;
private boolean _configured;
private Set _shitlist;
private I2PSnarkUtil() {
_context = I2PAppContext.getGlobalContext();
@@ -38,6 +40,7 @@ public class I2PSnarkUtil {
_opts = new HashMap();
setProxy("127.0.0.1", 4444);
setI2CPConfig("127.0.0.1", 7654, null);
_shitlist = new HashSet(64);
_configured = false;
}
@@ -110,18 +113,36 @@ public class I2PSnarkUtil {
public void disconnect() {
I2PSocketManager mgr = _manager;
_manager = null;
_shitlist.clear();
mgr.destroySocketManager();
}
/** connect to the given destination */
I2PSocket connect(PeerID peer) throws IOException {
Hash dest = peer.getAddress().calculateHash();
synchronized (_shitlist) {
if (_shitlist.contains(dest))
throw new IOException("Not trying to contact " + dest.toBase64() + ", as they are shitlisted");
}
try {
return _manager.connect(peer.getAddress());
I2PSocket rv = _manager.connect(peer.getAddress());
if (rv != null) synchronized (_shitlist) { _shitlist.remove(dest); }
return rv;
} catch (I2PException ie) {
synchronized (_shitlist) {
_shitlist.add(dest);
}
SimpleTimer.getInstance().addEvent(new Unshitlist(dest), 10*60*1000);
throw new IOException("Unable to reach the peer " + peer + ": " + ie.getMessage());
}
}
private class Unshitlist implements SimpleTimer.TimedEvent {
private Hash _dest;
public Unshitlist(Hash dest) { _dest = dest; }
public void timeReached() { synchronized (_shitlist) { _shitlist.remove(_dest); } }
}
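The shitlist added above follows a simple pattern: ban a destination on connect failure, and let a timer event lift the ban later. A stripped-down sketch using plain java.util.Timer in place of I2P's SimpleTimer (TempBanSet and all its names are illustrative, not from the I2P source):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.Timer;
import java.util.TimerTask;

/** Sketch of the i2psnark "brief shitlist": entries are banned on
 *  failure and automatically unbanned after a delay. */
public class TempBanSet<T> {
    private final Set<T> banned = new HashSet<T>();
    private final Timer timer = new Timer(true); // daemon, like SimpleTimer

    /** Ban an entry and schedule its automatic removal. */
    public void ban(final T entry, long banMillis) {
        synchronized (banned) { banned.add(entry); }
        timer.schedule(new TimerTask() {
            public void run() {
                synchronized (banned) { banned.remove(entry); }
            }
        }, banMillis);
    }

    public boolean isBanned(T entry) {
        synchronized (banned) { return banned.contains(entry); }
    }
}
```

A caller would check isBanned() before connecting and call ban() on an I2PException, mirroring the connect() logic in the diff above.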
/**
* fetch the given URL, returning the file it is stored in, or null on error
*/


@@ -318,6 +318,14 @@ public class Peer implements Comparable
PeerState s = state;
if (s != null)
{
// try to save partial piece
if (this.deregister) {
PeerListener p = state.listener;
if (p != null) {
p.savePeerPartial(state);
p.markUnrequested(this);
}
}
state = null;
PeerConnectionIn in = s.in;
@@ -458,4 +466,25 @@ public class Peer implements Comparable
return -1; //"no state";
}
}
/**
* Send keepalive
*/
public void keepAlive()
{
PeerState s = state;
if (s != null)
s.keepAlive();
}
/**
* Retransmit outstanding requests if necessary
*/
public void retransmitRequests()
{
PeerState s = state;
if (s != null)
s.retransmitRequests();
}
}


@@ -48,10 +48,7 @@ class PeerCheckerTask extends TimerTask
int peers = 0;
int uploaders = 0;
int downloaders = 0;
int interested = 0;
int interesting = 0;
int choking = 0;
int choked = 0;
int removedCount = 0;
long uploaded = 0;
long downloaded = 0;
@@ -80,17 +77,7 @@ class PeerCheckerTask extends TimerTask
uploaders++;
if (!peer.isChoked() && peer.isInteresting())
downloaders++;
if (peer.isInterested())
interested++;
if (peer.isInteresting())
interesting++;
if (peer.isChoking())
choking++;
if (peer.isChoked())
choked++;
// XXX - We should calculate the up/download rate a bit
// more intelligently
long upload = peer.getUploaded();
uploaded += upload;
long download = peer.getDownloaded();
@@ -113,7 +100,7 @@ class PeerCheckerTask extends TimerTask
// interested peers try to make some room.
// (Note use of coordinator.uploaders)
if (coordinator.uploaders >= PeerCoordinator.MAX_UPLOADERS
&& interested > PeerCoordinator.MAX_UPLOADERS
&& coordinator.interestedAndChoking > 0
&& !peer.isChoking())
{
// Check if it still wants pieces from us.
@@ -130,7 +117,7 @@ class PeerCheckerTask extends TimerTask
it.remove();
removed.add(peer);
}
else if (peer.isChoked())
else if (peer.isInteresting() && peer.isChoked())
{
// If they are choking us make someone else a downloader
if (Snark.debug >= Snark.DEBUG)
@@ -138,6 +125,21 @@ class PeerCheckerTask extends TimerTask
peer.setChoking(true);
uploaders--;
coordinator.uploaders--;
removedCount++;
// Put it at the back of the list
it.remove();
removed.add(peer);
}
else if (!peer.isInteresting() && !coordinator.completed())
{
// If they aren't interesting make someone else a downloader
if (Snark.debug >= Snark.DEBUG)
Snark.debug("Choke uninteresting peer: " + peer, Snark.DEBUG);
peer.setChoking(true);
uploaders--;
coordinator.uploaders--;
removedCount++;
// Put it at the back of the list
it.remove();
@@ -154,27 +156,37 @@ class PeerCheckerTask extends TimerTask
peer.setChoking(true);
uploaders--;
coordinator.uploaders--;
removedCount++;
// Put it at the back of the list
it.remove();
removed.add(peer);
}
else if (!peer.isChoking() && download < worstdownload)
else if (peer.isInteresting() && !peer.isChoked() &&
download < worstdownload)
{
// Make sure download is good if we are uploading
worstdownload = download;
worstDownloader = peer;
}
else if (upload < worstdownload && coordinator.completed())
{
// Make sure upload is good if we are seeding
worstdownload = upload;
worstDownloader = peer;
}
}
peer.retransmitRequests();
peer.keepAlive();
}
// Resync actual uploaders value
// (can shift a bit by disconnecting peers)
coordinator.uploaders = uploaders;
// Remove the worst downloader if needed.
// Remove the worst downloader if needed. (uploader if seeding)
if (uploaders >= PeerCoordinator.MAX_UPLOADERS
&& interested > PeerCoordinator.MAX_UPLOADERS
&& coordinator.interestedAndChoking > 0
&& worstDownloader != null)
{
if (Snark.debug >= Snark.DEBUG)
@@ -183,6 +195,7 @@ class PeerCheckerTask extends TimerTask
worstDownloader.setChoking(true);
coordinator.uploaders--;
removedCount++;
// Put it at the back of the list
coordinator.peers.remove(worstDownloader);
@@ -196,6 +209,11 @@ class PeerCheckerTask extends TimerTask
// Put peers back at the end of the list that we removed earlier.
coordinator.peers.addAll(removed);
coordinator.peerCount = coordinator.peers.size();
coordinator.interestedAndChoking += removedCount;
// store the rates
coordinator.setRateHistory(uploaded, downloaded);
}
if (coordinator.halted()) {
cancel();


@@ -33,7 +33,7 @@ class PeerConnectionIn implements Runnable
private final DataInputStream din;
private Thread thread;
private boolean quit;
private volatile boolean quit;
public PeerConnectionIn(Peer peer, DataInputStream din)
{
@@ -51,6 +51,13 @@ class PeerConnectionIn implements Runnable
Thread t = thread;
if (t != null)
t.interrupt();
if (din != null) {
try {
din.close();
} catch (IOException ioe) {
_log.warn("Error closing the stream from " + peer, ioe);
}
}
}
public void run()
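The volatile change above matters because quit is written by the disconnecting thread and read by the reader thread; without volatile the reader may never observe the store and spin forever. A minimal illustration (Worker and its names are hypothetical, not the i2psnark classes):

```java
/** Sketch of the PeerConnectionIn shutdown fix: a flag written by one
 *  thread and polled by another must be volatile for visibility. */
public class Worker implements Runnable {
    private volatile boolean quit;     // without volatile, run() may cache the old value
    private volatile boolean finished;

    public void run() {
        while (!quit) {
            Thread.yield();            // stand-in for the blocking read loop
        }
        finished = true;
    }

    public void stop()      { quit = true; }
    public boolean isDone() { return finished; }
}
```

Note the diff also closes the stream in disconnect(): the flag alone cannot wake a thread blocked inside a read(), so both halves of the fix are needed.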


@@ -72,6 +72,16 @@ class PeerConnectionOut implements Runnable
{
Message m = null;
PeerState state = null;
boolean shouldFlush;
synchronized(sendQueue)
{
shouldFlush = !quit && peer.isConnected() && sendQueue.isEmpty();
}
if (shouldFlush)
// Make sure everything will reach the other side.
// flush while not holding lock, could take a long time
dout.flush();
synchronized(sendQueue)
{
while (!quit && peer.isConnected() && sendQueue.isEmpty())
@@ -79,7 +89,8 @@ class PeerConnectionOut implements Runnable
try
{
// Make sure everything will reach the other side.
dout.flush();
// don't flush while holding lock, could take a long time
// dout.flush();
// Wait till more data arrives.
sendQueue.wait(60*1000);
@@ -259,7 +270,13 @@ class PeerConnectionOut implements Runnable
{
Message m = new Message();
m.type = Message.KEEP_ALIVE;
addMessage(m);
// addMessage(m);
synchronized(sendQueue)
{
if(sendQueue.isEmpty())
sendQueue.add(m);
sendQueue.notifyAll();
}
}
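The reshuffled flush above is a common lock-hygiene pattern: take the lock only long enough to inspect the queue, then do the potentially slow I/O unlocked. A stripped-down sketch (Sender and flushCount are illustrative stand-ins for PeerConnectionOut and dout.flush()):

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Sketch of the PeerConnectionOut fix: never hold the sendQueue
 *  lock across a flush, which can block for a long time. */
public class Sender {
    private final Queue<String> sendQueue = new ArrayDeque<String>();
    public int flushCount;                 // stands in for dout.flush()

    public void queue(String msg) {
        synchronized (sendQueue) {
            sendQueue.add(msg);
            sendQueue.notifyAll();
        }
    }

    /** One iteration of the writer loop. */
    public String sendNext() {
        boolean shouldFlush;
        synchronized (sendQueue) {         // only *check* under the lock
            shouldFlush = sendQueue.isEmpty();
        }
        if (shouldFlush)
            flushCount++;                  // slow I/O happens outside the lock
        synchronized (sendQueue) {
            return sendQueue.poll();
        }
    }
}
```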
void sendChoke(boolean choke)
@@ -318,6 +335,23 @@ class PeerConnectionOut implements Runnable
addMessage(m);
}
/** retransmit requests not received in 7m */
private static final int REQ_TIMEOUT = (2 * SEND_TIMEOUT) + (60 * 1000);
void retransmitRequests(List requests)
{
long now = System.currentTimeMillis();
Iterator it = requests.iterator();
while (it.hasNext())
{
Request req = (Request)it.next();
if(now > req.sendTime + REQ_TIMEOUT) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Retransmit request " + req + " to peer " + peer);
sendRequest(req);
}
}
}
void sendRequests(List requests)
{
Iterator it = requests.iterator();
@@ -330,12 +364,30 @@ class PeerConnectionOut implements Runnable
void sendRequest(Request req)
{
// Check for duplicate requests to deal with fibrillating i2p-bt
// (multiple choke/unchokes received cause duplicate requests in the queue)
synchronized(sendQueue)
{
Iterator it = sendQueue.iterator();
while (it.hasNext())
{
Message m = (Message)it.next();
if (m.type == Message.REQUEST && m.piece == req.piece &&
m.begin == req.off && m.length == req.len)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug("Discarding duplicate request " + req + " to peer " + peer);
return;
}
}
}
Message m = new Message();
m.type = Message.REQUEST;
m.piece = req.piece;
m.begin = req.off;
m.length = req.len;
addMessage(m);
req.sendTime = System.currentTimeMillis();
}
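The duplicate check above linearly scans the pending queue before enqueueing, so fibrillating choke/unchoke cycles cannot pile up identical requests. The same idea in miniature (DedupQueue and the Msg fields are illustrative, not the i2psnark Message class):

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Sketch of PeerConnectionOut.sendRequest's duplicate suppression:
 *  a request already sitting in the queue is silently discarded. */
public class DedupQueue {
    public static class Msg {
        public final int piece, begin, length;
        public Msg(int piece, int begin, int length) {
            this.piece = piece; this.begin = begin; this.length = length;
        }
    }

    private final Queue<Msg> sendQueue = new ArrayDeque<Msg>();

    /** Returns false if an identical request was already pending. */
    public boolean enqueue(Msg m) {
        synchronized (sendQueue) {
            for (Msg q : sendQueue) {
                if (q.piece == m.piece && q.begin == m.begin && q.length == m.length)
                    return false;          // duplicate: discard, as the fix does
            }
            sendQueue.add(m);
            return true;
        }
    }
}
```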
void sendPiece(int piece, int begin, int length, byte[] bytes)
@@ -346,7 +398,7 @@ class PeerConnectionOut implements Runnable
m.begin = begin;
m.length = length;
m.data = bytes;
m.off = begin;
m.off = 0;
m.len = length;
addMessage(m);
}


@@ -37,19 +37,23 @@ public class PeerCoordinator implements PeerListener
final Snark snark;
// package local for access by CheckDownLoadersTask
final static long CHECK_PERIOD = 20*1000; // 20 seconds
final static long CHECK_PERIOD = 40*1000; // 40 seconds
final static int MAX_CONNECTIONS = 24;
final static int MAX_UPLOADERS = 12; // i2p: might as well balance it out
final static int MAX_UPLOADERS = 4;
// Approximation of the number of current uploaders.
// Resynced by PeerChecker once in a while.
int uploaders = 0;
int interestedAndChoking = 0;
// final static int MAX_DOWNLOADERS = MAX_CONNECTIONS;
// int downloaders = 0;
private long uploaded;
private long downloaded;
final static int RATE_DEPTH = 6; // make following arrays RATE_DEPTH long
private long uploaded_old[] = {0,0,0,0,0,0};
private long downloaded_old[] = {0,0,0,0,0,0};
// synchronize on this when changing peers or downloaders
final List peers = new ArrayList();
@@ -62,7 +66,7 @@ public class PeerCoordinator implements PeerListener
private final byte[] id;
// Some random wanted pieces
private final List wantedPieces;
private List wantedPieces;
private boolean halted = false;
@@ -80,6 +84,15 @@ public class PeerCoordinator implements PeerListener
this.listener = listener;
this.snark = torrent;
setWantedPieces();
// Install a timer to check the uploaders.
timer.schedule(new PeerCheckerTask(this), CHECK_PERIOD, CHECK_PERIOD);
}
// only called externally from Storage after the double-check fails
public void setWantedPieces()
{
// Make a list of pieces
wantedPieces = new ArrayList();
BitField bitfield = storage.getBitField();
@@ -87,11 +100,8 @@ public class PeerCoordinator implements PeerListener
if (!bitfield.get(i))
wantedPieces.add(new Piece(i));
Collections.shuffle(wantedPieces);
// Install a timer to check the uploaders.
timer.schedule(new PeerCheckerTask(this), CHECK_PERIOD, CHECK_PERIOD);
}
public Storage getStorage() { return storage; }
public CoordinatorListener getListener() { return listener; }
@@ -123,7 +133,7 @@ public class PeerCoordinator implements PeerListener
public long getLeft()
{
// XXX - Only an approximation.
return storage.needed() * metainfo.getPieceLength(0);
return ((long) storage.needed()) * metainfo.getPieceLength(0);
}
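The cast added to getLeft() above fixes a classic Java pitfall: int * int is computed in 32 bits and only then widened to long, so a large torrent's remaining bytes can silently overflow. A self-contained demonstration (method names and the piece counts are made up for illustration):

```java
/** Why ((long) storage.needed()) * pieceLength matters. */
public class LeftCalc {
    /** Overflows: the multiply happens in int, then is widened. */
    public static long leftWrong(int neededPieces, int pieceLength) {
        return neededPieces * pieceLength;
    }

    /** Correct: widen one operand first so the multiply is done in long. */
    public static long leftRight(int neededPieces, int pieceLength) {
        return ((long) neededPieces) * pieceLength;
    }
}
```

With 4096 pieces of 1 MiB each, the product is exactly 2^32 and the int version wraps to zero.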
/**
@@ -142,6 +152,43 @@ public class PeerCoordinator implements PeerListener
return downloaded;
}
/**
* Push the total uploaded/downloaded onto a RATE_DEPTH deep stack
*/
public void setRateHistory(long up, long down)
{
for (int i = RATE_DEPTH-1; i > 0; i--){
uploaded_old[i] = uploaded_old[i-1];
downloaded_old[i] = downloaded_old[i-1];
}
uploaded_old[0] = up;
downloaded_old[0] = down;
}
/**
* Returns the 4-minute-average rate in Bps
*/
public long getDownloadRate()
{
long rate = 0;
for (int i = 0; i < RATE_DEPTH; i++){
rate += downloaded_old[i];
}
return rate / (RATE_DEPTH * CHECK_PERIOD / 1000);
}
/**
* Returns the 4-minute-average rate in Bps
*/
public long getUploadRate()
{
long rate = 0;
for (int i = 0; i < RATE_DEPTH; i++){
rate += uploaded_old[i];
}
return rate / (RATE_DEPTH * CHECK_PERIOD / 1000);
}
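The arithmetic behind the "4-minute-average" comments above: RATE_DEPTH (6) snapshots taken every CHECK_PERIOD (40 s) span a 240-second window, so dividing the summed byte counts by RATE_DEPTH * CHECK_PERIOD / 1000 yields bytes per second. A compact sketch, assuming each push() receives the bytes moved during the latest period (that assumption, and the class name, are mine, not confirmed by the diff):

```java
/** Sketch of the PeerCoordinator rate stack: a fixed-depth history
 *  of per-period byte counts, averaged over the whole window. */
public class RateWindow {
    static final int RATE_DEPTH = 6;            // snapshots kept
    static final long CHECK_PERIOD = 40 * 1000; // ms between snapshots

    private final long[] history = new long[RATE_DEPTH];

    /** Push the bytes moved during the latest period onto the stack. */
    public void push(long bytes) {
        for (int i = RATE_DEPTH - 1; i > 0; i--)
            history[i] = history[i - 1];
        history[0] = bytes;
    }

    /** Average rate in bytes/second over the 6 * 40 s = 240 s window. */
    public long rate() {
        long total = 0;
        for (int i = 0; i < RATE_DEPTH; i++)
            total += history[i];
        return total / (RATE_DEPTH * CHECK_PERIOD / 1000);
    }
}
```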
public MetaInfo getMetaInfo()
{
return metainfo;
@@ -191,8 +238,10 @@ public class PeerCoordinator implements PeerListener
synchronized(peers)
{
Peer old = peerIDInList(peer.getPeerID(), peers);
if ( (old != null) && (old.getInactiveTime() > 2*60*1000) ) {
// idle for 2 minutes, kill the old con
if ( (old != null) && (old.getInactiveTime() > 4*60*1000) ) {
// idle for 4 minutes, kill the old con (64KB/4min = 273B/sec minimum for one block)
if (_log.shouldLog(Log.WARN))
_log.warn("Removing old peer: " + peer + ": " + old + ", inactive for " + old.getInactiveTime());
peers.remove(old);
toDisconnect = old;
old = null;
@@ -235,12 +284,13 @@ public class PeerCoordinator implements PeerListener
return null;
}
public void addPeer(final Peer peer)
// returns true if actual attempt to add peer occurs
public boolean addPeer(final Peer peer)
{
if (halted)
{
peer.disconnect(false);
return;
return false;
}
boolean need_more;
@@ -265,6 +315,7 @@ public class PeerCoordinator implements PeerListener
};
String threadName = peer.toString();
new I2PThread(r, threadName).start();
return true;
}
else
if (_log.shouldLog(Log.DEBUG)) {
@@ -274,6 +325,7 @@ public class PeerCoordinator implements PeerListener
_log.info("MAX_CONNECTIONS = " + MAX_CONNECTIONS
+ " not accepting extra peer: " + peer);
}
return false;
}
@@ -285,19 +337,23 @@ public class PeerCoordinator implements PeerListener
// other peer that are interested, but are choking us.
List interested = new LinkedList();
synchronized (peers) {
int count = 0;
int unchokedCount = 0;
Iterator it = peers.iterator();
while (it.hasNext())
{
Peer peer = (Peer)it.next();
boolean remove = false;
if (uploaders < MAX_UPLOADERS
&& peer.isChoking()
&& peer.isInterested())
if (peer.isChoking() && peer.isInterested())
{
if (!peer.isChoked())
interested.add(0, peer);
else
interested.add(peer);
count++;
if (uploaders < MAX_UPLOADERS)
{
if (!peer.isChoked())
interested.add(unchokedCount++, peer);
else
interested.add(peer);
}
}
}
@@ -308,11 +364,13 @@ public class PeerCoordinator implements PeerListener
_log.debug("Unchoke: " + peer);
peer.setChoking(false);
uploaders++;
count--;
// Put peer back at the end of the list.
peers.remove(peer);
peers.add(peer);
peerCount = peers.size();
}
interestedAndChoking = count;
}
}
@@ -351,9 +409,10 @@ public class PeerCoordinator implements PeerListener
{
Piece p = (Piece)it.next();
int i = p.getId();
if (bitfield.get(i))
if (bitfield.get(i)) {
p.addPeer(peer);
return true;
}
}
}
return false;
@@ -392,6 +451,8 @@ public class PeerCoordinator implements PeerListener
//Only request a piece we've requested before if there's no other choice.
if (piece == null) {
// let's not all get on the same piece
Collections.shuffle(requested);
Iterator it2 = requested.iterator();
while (piece == null && it2.hasNext())
{
@@ -403,9 +464,17 @@ public class PeerCoordinator implements PeerListener
}
if (piece == null) {
if (_log.shouldLog(Log.WARN))
_log.warn("nothing to even rerequest from " + peer + ": requested = " + requested
+ " wanted = " + wantedPieces + " peerHas = " + havePieces);
_log.warn("nothing to even rerequest from " + peer + ": requested = " + requested);
// _log.warn("nothing to even rerequest from " + peer + ": requested = " + requested
// + " wanted = " + wantedPieces + " peerHas = " + havePieces);
return -1; //If we still can't find a piece we want, so be it.
} else {
// Should be a lot smarter here - limit # of parallel attempts and
// share blocks rather than starting from 0 with each peer.
// This is where the flaws of the snark data model are really exposed.
// Could also randomize within the duplicate set rather than strict rarest-first
if (_log.shouldLog(Log.DEBUG))
_log.debug("parallel request (end game?) for " + peer + ": piece = " + piece);
}
}
piece.setRequested(true);
@@ -417,14 +486,14 @@ public class PeerCoordinator implements PeerListener
* Returns a byte array containing the requested piece or null of
* the piece is unknown.
*/
public byte[] gotRequest(Peer peer, int piece)
public byte[] gotRequest(Peer peer, int piece, int off, int len)
{
if (halted)
return null;
try
{
return storage.getPiece(piece);
return storage.getPiece(piece, off, len);
}
catch (IOException ioe)
{
@@ -582,4 +651,124 @@ public class PeerCoordinator implements PeerListener
}
}
}
/** Simple method to save a partial piece on peer disconnection
* and hopefully restart it later.
* Only one partial piece is saved at a time.
* Replace it if a new one is bigger or the old one is too old.
* Storage method is private so we can expand to save multiple partials
* if we wish.
*/
private Request savedRequest = null;
private long savedRequestTime = 0;
public void savePeerPartial(PeerState state)
{
Request req = state.getPartialRequest();
if (req == null)
return;
if (savedRequest == null ||
req.off > savedRequest.off ||
System.currentTimeMillis() > savedRequestTime + (15 * 60 * 1000)) {
if (savedRequest == null || (req.piece != savedRequest.piece && req.off != savedRequest.off)) {
if (_log.shouldLog(Log.DEBUG)) {
_log.debug(" Saving orphaned partial piece " + req);
if (savedRequest != null)
_log.debug(" (Discarding previously saved orphan) " + savedRequest);
}
}
savedRequest = req;
savedRequestTime = System.currentTimeMillis();
} else {
if (req.piece != savedRequest.piece)
if (_log.shouldLog(Log.DEBUG))
_log.debug(" Discarding orphaned partial piece " + req);
}
}
/** Return partial piece if it's still wanted and peer has it.
*/
public Request getPeerPartial(BitField havePieces) {
if (savedRequest == null)
return null;
if (! havePieces.get(savedRequest.piece)) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Peer doesn't have orphaned piece " + savedRequest);
return null;
}
synchronized(wantedPieces)
{
for(Iterator iter = wantedPieces.iterator(); iter.hasNext(); ) {
Piece piece = (Piece)iter.next();
if (piece.getId() == savedRequest.piece) {
Request req = savedRequest;
piece.setRequested(true);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Restoring orphaned partial piece " + req);
savedRequest = null;
return req;
}
}
}
if (_log.shouldLog(Log.DEBUG))
_log.debug("We no longer want orphaned piece " + savedRequest);
savedRequest = null;
return null;
}
/** Clear the requested flag for a piece if the peer
** is the only one requesting it
*/
private void markUnrequestedIfOnlyOne(Peer peer, int piece)
{
// see if anybody else is requesting
synchronized (peers)
{
Iterator it = peers.iterator();
while (it.hasNext()) {
Peer p = (Peer)it.next();
if (p.equals(peer))
continue;
if (p.state == null)
continue;
int[] arr = p.state.getRequestedPieces();
for (int i = 0; arr[i] >= 0; i++)
if(arr[i] == piece) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Another peer is requesting piece " + piece);
return;
}
}
}
// nobody is, so mark unrequested
synchronized(wantedPieces)
{
Iterator it = wantedPieces.iterator();
while (it.hasNext()) {
Piece p = (Piece)it.next();
if (p.getId() == piece) {
p.setRequested(false);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Removing from request list piece " + piece);
return;
}
}
}
}
/** Mark a peer's requested pieces unrequested when it is disconnected
** Once for each piece
** This is enough trouble, maybe would be easier just to regenerate
** the requested list from scratch instead.
*/
public void markUnrequested(Peer peer)
{
if (peer.state == null)
return;
int[] arr = peer.state.getRequestedPieces();
for (int i = 0; arr[i] >= 0; i++)
markUnrequestedIfOnlyOne(peer, arr[i]);
}
}


@@ -107,11 +107,13 @@ public interface PeerListener
*
* @param peer the Peer that wants the piece.
* @param piece the piece number requested.
* @param off byte offset into the piece.
* @param len length of the chunk requested.
*
* @return a byte array containing the piece or null when the piece
* is not available (which is a protocol error).
*/
byte[] gotRequest(Peer peer, int piece);
byte[] gotRequest(Peer peer, int piece, int off, int len);
/**
* Called when a (partial) piece has been downloaded from the peer.
@@ -142,4 +144,29 @@ public interface PeerListener
* we are no longer interested in the peer.
*/
int wantPiece(Peer peer, BitField bitfield);
/**
* Called when the peer has disconnected and the peer task may have a partially
* downloaded piece that the PeerCoordinator can save
*
* @param state the PeerState for the peer
*/
void savePeerPartial(PeerState state);
/**
* Called when a peer has connected and there may be a partially
* downloaded piece that the coordinator can give the peer task
*
* @param havePieces the have-pieces bitmask for the peer
*
* @return request (contains the partial data and valid length)
*/
Request getPeerPartial(BitField havePieces);
/** Mark a peer's requested pieces unrequested when it is disconnected
* This prevents premature end game
*
* @param peer the peer that is disconnecting
*/
void markUnrequested(Peer peer);
}


@@ -62,7 +62,7 @@ class PeerState
// If we have to resend outstanding requests (true after we got choked).
private boolean resend = false;
private final static int MAX_PIPELINE = 1;
private final static int MAX_PIPELINE = 2;
private final static int PARTSIZE = 64*1024; // default was 16K, i2p-bt uses 64KB
PeerState(Peer peer, PeerListener listener, MetaInfo metainfo,
@@ -184,7 +184,7 @@ class PeerState
return;
}
byte[] pieceBytes = listener.gotRequest(peer, piece);
byte[] pieceBytes = listener.gotRequest(peer, piece, begin, length);
if (pieceBytes == null)
{
// XXX - Protocol error-> disconnect?
@@ -194,7 +194,7 @@ class PeerState
}
// More sanity checks
if (begin >= pieceBytes.length || begin + length > pieceBytes.length)
if (length != pieceBytes.length)
{
// XXX - Protocol error-> disconnect?
if (_log.shouldLog(Log.WARN))
@@ -221,6 +221,10 @@ class PeerState
listener.uploaded(peer, size);
}
// This is used to flag that we have to back up from the firstOutstandingRequest
// when calculating how far we've gotten
Request pendingRequest = null;
/**
* Called when a partial piece request has been handled by
* PeerConnectionIn.
@@ -231,6 +235,8 @@ class PeerState
downloaded += size;
listener.downloaded(peer, size);
pendingRequest = null;
// Last chunk needed for this piece?
if (getFirstOutstandingRequest(req.piece) == -1)
{
@@ -318,14 +324,8 @@ class PeerState
{
Request dropReq = (Request)outstandingRequests.remove(0);
outstandingRequests.add(dropReq);
// We used to rerequest the missing chunks but that mostly
// just confuses the other side. So now we just keep
// waiting for them. They will be rerequested when we get
// choked/unchoked again.
/*
if (!choked)
if (!choked)
out.sendRequest(dropReq);
*/
if (_log.shouldLog(Log.WARN))
_log.warn("dropped " + dropReq + " with peer " + peer);
}
@@ -336,10 +336,58 @@ class PeerState
// Request more if necessary to keep the pipeline filled.
addRequest();
pendingRequest = req;
return req;
}
// get longest partial piece
Request getPartialRequest()
{
Request req = null;
for (int i = 0; i < outstandingRequests.size(); i++) {
Request r1 = (Request)outstandingRequests.get(i);
int j = getFirstOutstandingRequest(r1.piece);
if (j == -1)
continue;
Request r2 = (Request)outstandingRequests.get(j);
if (r2.off > 0 && ((req == null) || (r2.off > req.off)))
req = r2;
}
if (pendingRequest != null && req != null && pendingRequest.off < req.off) {
if (pendingRequest.off != 0)
req = pendingRequest;
else
req = null;
}
return req;
}
// return array of pieces terminated by -1
// remove most duplicates
// but still could be some duplicates, not guaranteed
int[] getRequestedPieces()
{
int size = outstandingRequests.size();
int[] arr = new int[size+2];
int pc = -1;
int pos = 0;
if (pendingRequest != null) {
pc = pendingRequest.piece;
arr[pos++] = pc;
}
Request req = null;
for (int i = 0; i < size; i++) {
Request r1 = (Request)outstandingRequests.get(i);
if (pc != r1.piece) {
pc = r1.piece;
arr[pos++] = pc;
}
}
arr[pos] = -1;
return(arr);
}
void cancelMessage(int piece, int begin, int length)
{
if (_log.shouldLog(Log.DEBUG))
@@ -414,16 +462,12 @@ class PeerState
/**
* Adds a new request to the outstanding requests list.
*/
private void addRequest()
synchronized private void addRequest()
{
boolean more_pieces = true;
while (more_pieces)
{
synchronized(this)
{
more_pieces = outstandingRequests.size() < MAX_PIPELINE;
}
more_pieces = outstandingRequests.size() < MAX_PIPELINE;
// We want something and we don't have outstanding requests?
if (more_pieces && lastRequest == null)
more_pieces = requestNextPiece();
@@ -431,19 +475,14 @@ class PeerState
{
int pieceLength;
boolean isLastChunk;
synchronized(this)
{
pieceLength = metainfo.getPieceLength(lastRequest.piece);
isLastChunk = lastRequest.off + lastRequest.len == pieceLength;
}
pieceLength = metainfo.getPieceLength(lastRequest.piece);
isLastChunk = lastRequest.off + lastRequest.len == pieceLength;
// Last part of a piece?
if (isLastChunk)
more_pieces = requestNextPiece();
else
{
synchronized(this)
{
int nextPiece = lastRequest.piece;
int nextBegin = lastRequest.off + PARTSIZE;
byte[] bs = lastRequest.bs;
@@ -456,7 +495,6 @@ class PeerState
if (!choked)
out.sendRequest(req);
lastRequest = req;
}
}
}
}
@@ -472,16 +510,41 @@ class PeerState
// Check that we already know what the other side has.
if (bitfield != null)
{
// Check for adopting an orphaned partial piece
Request r = listener.getPeerPartial(bitfield);
if (r != null) {
// Check that r not already in outstandingRequests
int[] arr = getRequestedPieces();
boolean found = false;
for (int i = 0; arr[i] >= 0; i++) {
if (arr[i] == r.piece) {
found = true;
break;
}
}
if (!found) {
outstandingRequests.add(r);
if (!choked)
out.sendRequest(r);
lastRequest = r;
return true;
}
}
int nextPiece = listener.wantPiece(peer, bitfield);
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " want piece " + nextPiece);
synchronized(this)
{
if (nextPiece != -1
&& (lastRequest == null || lastRequest.piece != nextPiece))
{
int piece_length = metainfo.getPieceLength(nextPiece);
byte[] bs = new byte[piece_length];
//Catch a common place for OOMs esp. on 1MB pieces
byte[] bs;
try {
bs = new byte[piece_length];
} catch (OutOfMemoryError oom) {
_log.warn("Out of memory, can't request piece " + nextPiece, oom);
return false;
}
int length = Math.min(piece_length, PARTSIZE);
Request req = new Request(nextPiece, bs, 0, length);
@@ -491,7 +554,6 @@ class PeerState
lastRequest = req;
return true;
}
}
}
return false;
@@ -523,4 +585,15 @@ class PeerState
out.sendChoke(choke);
}
}
void keepAlive()
{
out.sendAlive();
}
synchronized void retransmitRequests()
{
if (interesting && !choked)
out.retransmitRequests(outstandingRequests);
}
}
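addRequest() above keeps at most MAX_PIPELINE chunk requests outstanding (raised from 1 to 2 in this change) and advances through a piece in PARTSIZE steps. A simplified model of that chunking arithmetic, ignoring the network side (the helper class name is hypothetical):

```java
// Simplified model of splitting a piece into PARTSIZE chunks, as addRequest()
// does when it advances lastRequest by PARTSIZE until the piece is exhausted.
public class ChunkModel {
    static final int PARTSIZE = 64 * 1024; // i2p-bt uses 64KB

    /** Number of chunk requests needed to cover a piece of the given length. */
    static int chunksPerPiece(int pieceLength) {
        return (pieceLength + PARTSIZE - 1) / PARTSIZE;
    }

    /** Length of the chunk starting at offset off within the piece. */
    static int chunkLength(int pieceLength, int off) {
        return Math.min(pieceLength - off, PARTSIZE);
    }
}
```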


@@ -29,6 +29,7 @@ class Request
final byte[] bs;
final int off;
final int len;
long sendTime;
/**
* Creates a new Request.


@@ -234,6 +234,7 @@ public class Snark
public String rootDataDir = ".";
public CompleteListener completeListener;
public boolean stopped;
byte[] id;
Snark(String torrent, String ip, int user_port,
StorageListener slistener, CoordinatorListener clistener) {
@@ -268,7 +269,7 @@ public class Snark
// zero bytes, then three bytes filled with snark and then
// sixteen random bytes.
byte snark = (((3 + 7 + 10) * (1000 - 8)) / 992) - 17;
byte[] id = new byte[20];
id = new byte[20];
Random random = new Random();
int i;
for (i = 0; i < 9; i++)
@@ -283,6 +284,11 @@ public class Snark
int port;
IOException lastException = null;
/*
* Don't start a tunnel if the torrent isn't going to be started.
* If we are starting,
* startTorrent() will call trackerclient.start() which will force a connect.
*
boolean ok = I2PSnarkUtil.instance().connect();
if (!ok) fatal("Unable to connect to I2P");
I2PServerSocket serversocket = I2PSnarkUtil.instance().getServerSocket();
@@ -292,6 +298,7 @@ public class Snark
Destination d = serversocket.getManager().getSession().getMyDestination();
debug("Listening on I2P destination " + d.toBase64() + " / " + d.calculateHash().toBase64(), NOTICE);
}
*/
// Figure out what the torrent argument represents.
meta = null;
@@ -371,14 +378,19 @@ public class Snark
}
}
/*
* see comment above
*
activity = "Collecting pieces";
coordinator = new PeerCoordinator(id, meta, storage, clistener, this);
PeerCoordinatorSet set = PeerCoordinatorSet.instance();
set.add(coordinator);
ConnectionAcceptor acceptor = ConnectionAcceptor.instance();
acceptor.startAccepting(set, serversocket);
trackerclient = new TrackerClient(meta, coordinator);
*/
if (start)
startTorrent();
}
@@ -386,6 +398,26 @@ public class Snark
* Start up contacting peers and querying the tracker
*/
public void startTorrent() {
boolean ok = I2PSnarkUtil.instance().connect();
if (!ok) fatal("Unable to connect to I2P");
if (coordinator == null) {
I2PServerSocket serversocket = I2PSnarkUtil.instance().getServerSocket();
if (serversocket == null)
fatal("Unable to listen for I2P connections");
else {
Destination d = serversocket.getManager().getSession().getMyDestination();
debug("Listening on I2P destination " + d.toBase64() + " / " + d.calculateHash().toBase64(), NOTICE);
}
debug("Starting PeerCoordinator, ConnectionAcceptor, and TrackerClient", NOTICE);
activity = "Collecting pieces";
coordinator = new PeerCoordinator(id, meta, storage, this, this);
PeerCoordinatorSet set = PeerCoordinatorSet.instance();
set.add(coordinator);
ConnectionAcceptor acceptor = ConnectionAcceptor.instance();
acceptor.startAccepting(set, serversocket);
trackerclient = new TrackerClient(meta, coordinator);
}
stopped = false;
boolean coordinatorChanged = false;
if (coordinator.halted()) {
@@ -402,6 +434,17 @@ public class Snark
if (!trackerclient.started() && !coordinatorChanged) {
trackerclient.start();
} else if (trackerclient.halted() || coordinatorChanged) {
try
{
storage.reopen(rootDataDir);
}
catch (IOException ioe)
{
try { storage.close(); } catch (IOException ioee) {
ioee.printStackTrace();
}
fatal("Could not reopen storage", ioe);
}
TrackerClient newClient = new TrackerClient(coordinator.getMetaInfo(), coordinator);
if (!trackerclient.halted())
trackerclient.halt();
@@ -633,11 +676,11 @@ public class Snark
boolean allocating = false;
public void storageCreateFile(Storage storage, String name, long length)
{
if (allocating)
System.out.println(); // Done with last file.
//if (allocating)
// System.out.println(); // Done with last file.
System.out.print("Creating file '" + name
+ "' of length " + length + ": ");
//System.out.print("Creating file '" + name
// + "' of length " + length + ": ");
allocating = true;
}
@@ -647,10 +690,10 @@ public class Snark
public void storageAllocated(Storage storage, long length)
{
allocating = true;
System.out.print(".");
//System.out.print(".");
allocated += length;
if (allocated == meta.getTotalLength())
System.out.println(); // We have all the disk space we need.
//if (allocated == meta.getTotalLength())
// System.out.println(); // We have all the disk space we need.
}
boolean allChecked = false;
@@ -664,26 +707,21 @@ public class Snark
// Use the MetaInfo from the storage since our own might not
// yet be setup correctly.
MetaInfo meta = storage.getMetaInfo();
if (meta != null)
System.out.print("Checking existing "
+ meta.getPieces()
+ " pieces: ");
//if (meta != null)
// System.out.print("Checking existing "
// + meta.getPieces()
// + " pieces: ");
checking = true;
}
if (checking)
if (checked)
System.out.print("+");
else
System.out.print("-");
else
if (!checking)
Snark.debug("Got " + (checked ? "" : "BAD ") + "piece: " + num,
Snark.INFO);
}
public void storageAllChecked(Storage storage)
{
if (checking)
System.out.println();
//if (checking)
// System.out.println();
allChecked = true;
checking = false;
@@ -693,11 +731,16 @@ public class Snark
{
Snark.debug("Completely received " + torrent, Snark.INFO);
//storage.close();
System.out.println("Completely received: " + torrent);
//System.out.println("Completely received: " + torrent);
if (completeListener != null)
completeListener.torrentComplete(this);
}
public void setWantedPieces(Storage storage)
{
coordinator.setWantedPieces();
}
public void shutdown()
{
// Should not be necessary since all non-daemon threads should


@@ -16,6 +16,7 @@ public class SnarkManager implements Snark.CompleteListener {
/** map of (canonical) filename to Snark instance (unsynchronized) */
private Map _snarks;
private Object _addSnarkLock;
private String _configFile;
private Properties _config;
private I2PAppContext _context;
@@ -34,12 +35,13 @@ public class SnarkManager implements Snark.CompleteListener {
private SnarkManager() {
_snarks = new HashMap();
_addSnarkLock = new Object();
_context = I2PAppContext.getGlobalContext();
_log = _context.logManager().getLog(SnarkManager.class);
_messages = new ArrayList(16);
loadConfig("i2psnark.config");
int minutes = getStartupDelayMinutes();
_messages.add("Starting up torrents in " + minutes + (minutes == 1 ? " minute" : " minutes"));
_messages.add("Adding torrents in " + minutes + (minutes == 1 ? " minute" : " minutes"));
I2PThread monitor = new I2PThread(new DirMonitor(), "Snark DirMonitor");
monitor.setDaemon(true);
monitor.start();
@@ -91,6 +93,8 @@ public class SnarkManager implements Snark.CompleteListener {
_config.setProperty(PROP_I2CP_HOST, "localhost");
if (!_config.containsKey(PROP_I2CP_PORT))
_config.setProperty(PROP_I2CP_PORT, "7654");
if (!_config.containsKey(PROP_I2CP_OPTS))
_config.setProperty(PROP_I2CP_OPTS, "inbound.length=1 inbound.lengthVariance=1 outbound.length=1 outbound.lengthVariance=1");
if (!_config.containsKey(PROP_EEP_HOST))
_config.setProperty(PROP_EEP_HOST, "localhost");
if (!_config.containsKey(PROP_EEP_PORT))
@@ -267,7 +271,7 @@ public class SnarkManager implements Snark.CompleteListener {
public Snark getTorrent(String filename) { synchronized (_snarks) { return (Snark)_snarks.get(filename); } }
public void addTorrent(String filename) { addTorrent(filename, false); }
public void addTorrent(String filename, boolean dontAutoStart) {
if (!I2PSnarkUtil.instance().connected()) {
if ((!dontAutoStart) && !I2PSnarkUtil.instance().connected()) {
addMessage("Connecting to I2P");
boolean ok = I2PSnarkUtil.instance().connect();
if (!ok) {
@@ -287,7 +291,16 @@ public class SnarkManager implements Snark.CompleteListener {
Snark torrent = null;
synchronized (_snarks) {
torrent = (Snark)_snarks.get(filename);
if (torrent == null) {
}
// don't hold the _snarks lock while verifying the torrent
if (torrent == null) {
synchronized (_addSnarkLock) {
// double-check
synchronized (_snarks) {
if(_snarks.get(filename) != null)
return;
}
FileInputStream fis = null;
try {
fis = new FileInputStream(sfile);
@@ -303,7 +316,9 @@ public class SnarkManager implements Snark.CompleteListener {
} else {
torrent = new Snark(filename, null, -1, null, null, false, dataDir.getPath());
torrent.completeListener = this;
_snarks.put(filename, torrent);
synchronized (_snarks) {
_snarks.put(filename, torrent);
}
}
} catch (IOException ioe) {
addMessage("Torrent in " + sfile.getName() + " is invalid: " + ioe.getMessage());
@@ -313,9 +328,9 @@ public class SnarkManager implements Snark.CompleteListener {
} finally {
if (fis != null) try { fis.close(); } catch (IOException ioe) {}
}
} else {
return;
}
} else {
return;
}
// ok, snark created, now lets start it up or configure it further
File f = new File(filename);
@@ -333,7 +348,7 @@ public class SnarkManager implements Snark.CompleteListener {
return "Too many files in " + info.getName() + " (" + files.size() + "), deleting it";
} else if (info.getPieces() <= 0) {
return "No pieces in " + info.getName() + "? deleting it";
} else if (info.getPieceLength(0) > 10*1024*1024) {
} else if (info.getPieceLength(0) > 1*1024*1024) {
return "Pieces are too large in " + info.getName() + " (" + info.getPieceLength(0)/1024 + "KB), deleting it";
} else if (info.getTotalLength() > 10*1024*1024*1024l) {
System.out.println("torrent info: " + info.toString());
@@ -444,10 +459,9 @@ public class SnarkManager implements Snark.CompleteListener {
if (existingNames.contains(foundNames.get(i))) {
// already known. noop
} else {
if (I2PSnarkUtil.instance().connect())
addTorrent((String)foundNames.get(i));
else
if (shouldAutoStart() && !I2PSnarkUtil.instance().connect())
addMessage("Unable to connect to I2P");
addTorrent((String)foundNames.get(i), !shouldAutoStart());
}
}
// now lets see which ones have been removed...
@@ -464,7 +478,10 @@ public class SnarkManager implements Snark.CompleteListener {
private static final String DEFAULT_TRACKERS[] = {
"Postman's tracker", "http://YRgrgTLGnbTq2aZOZDJQ~o6Uk5k6TK-OZtx0St9pb0G-5EGYURZioxqYG8AQt~LgyyI~NCj6aYWpPO-150RcEvsfgXLR~CxkkZcVpgt6pns8SRc3Bi-QSAkXpJtloapRGcQfzTtwllokbdC-aMGpeDOjYLd8b5V9Im8wdCHYy7LRFxhEtGb~RL55DA8aYOgEXcTpr6RPPywbV~Qf3q5UK55el6Kex-6VCxreUnPEe4hmTAbqZNR7Fm0hpCiHKGoToRcygafpFqDw5frLXToYiqs9d4liyVB-BcOb0ihORbo0nS3CLmAwZGvdAP8BZ7cIYE3Z9IU9D1G8JCMxWarfKX1pix~6pIA-sp1gKlL1HhYhPMxwyxvuSqx34o3BqU7vdTYwWiLpGM~zU1~j9rHL7x60pVuYaXcFQDR4-QVy26b6Pt6BlAZoFmHhPcAuWfu-SFhjyZYsqzmEmHeYdAwa~HojSbofg0TMUgESRXMw6YThK1KXWeeJVeztGTz25sL8AAAA.i2p/announce.php"
, "Orion's tracker", "http://gKik1lMlRmuroXVGTZ~7v4Vez3L3ZSpddrGZBrxVriosCQf7iHu6CIk8t15BKsj~P0JJpxrofeuxtm7SCUAJEr0AIYSYw8XOmp35UfcRPQWyb1LsxUkMT4WqxAT3s1ClIICWlBu5An~q-Mm0VFlrYLIPBWlUFnfPR7jZ9uP5ZMSzTKSMYUWao3ejiykr~mtEmyls6g-ZbgKZawa9II4zjOy-hdxHgP-eXMDseFsrym4Gpxvy~3Fv9TuiSqhpgm~UeTo5YBfxn6~TahKtE~~sdCiSydqmKBhxAQ7uT9lda7xt96SS09OYMsIWxLeQUWhns-C~FjJPp1D~IuTrUpAFcVEGVL-BRMmdWbfOJEcWPZ~CBCQSO~VkuN1ebvIOr9JBerFMZSxZtFl8JwcrjCIBxeKPBmfh~xYh16BJm1BBBmN1fp2DKmZ2jBNkAmnUbjQOqWvUcehrykWk5lZbE7bjJMDFH48v3SXwRuDBiHZmSbsTY6zhGY~GkMQHNGxPMMSIAAAA.i2p/bt"
, "eBook Tracker", "http://E71FRom6PZNEqTN2Lr8P-sr23b7HJVC32KoGnVQjaX6zJiXwhJy2HsXob36Qmj81TYFZdewFZa9mSJ533UZgGyQkXo2ahctg82JKYZfDe5uDxAn1E9YPjxZCWJaFJh0S~UwSs~9AZ7UcauSJIoNtpxrtbmRNVFLqnkEDdLZi26TeucfOmiFmIWnVblLniWv3tG1boE9Abd-6j3FmYVrRucYuepAILYt6katmVNOk6sXmno1Eynrp~~MBuFq0Ko6~jsc2E2CRVYXDhGHEMdt-j6JUz5D7S2RIVzDRqQyAZLKJ7OdQDmI31przzmne1vOqqqLC~1xUumZVIvF~yOeJUGNjJ1Vx0J8i2BQIusn1pQJ6UCB~ZtZZLQtEb8EPVCfpeRi2ri1M5CyOuxN0V5ekmPHrYIBNevuTCRC26NP7ZS5VDgx1~NaC3A-CzJAE6f1QXi0wMI9aywNG5KGzOPifcsih8eyGyytvgLtrZtV7ykzYpPCS-rDfITncpn5hliPUAAAA.i2p/pub/bt/announce.php"
, "Gaytorrents Tracker", "http://uxPWHbK1OIj9HxquaXuhMiIvi21iK0~ZiG9d8G0840ZXIg0r6CbiV71xlsqmdnU6wm0T2LySriM0doW2gUigo-5BNkUquHwOjLROiETnB3ZR0Ml4IGa6QBPn1aAq2d9~g1r1nVjLE~pcFnXB~cNNS7kIhX1d6nLgYVZf0C2cZopEow2iWVUggGGnAA9mHjE86zLEnTvAyhbAMTqDQJhEuLa0ZYSORqzJDMkQt90MV4YMjX1ICY6RfUSFmxEqu0yWTrkHsTtRw48l~dz9wpIgc0a0T9C~eeWvmBFTqlJPtQZwntpNeH~jF7nlYzB58olgV2HHFYpVYD87DYNzTnmNWxCJ5AfDorm6AIUCV2qaE7tZtI1h6fbmGpGlPyW~Kw5GXrRfJwNvr6ajwAVi~bPVnrBwDZezHkfW4slOO8FACPR28EQvaTu9nwhAbqESxV2hCTq6vQSGjuxHeOuzBOEvRWkLKOHWTC09t2DbJ94FSqETmZopTB1ukEmaxRWbKSIaAAAA.i2p/announce.php"
, "NickyB Tracker", "http://9On6d3cZ27JjwYCtyJJbowe054d5tFnfMjv4PHsYs-EQn4Y4mk2zRixatvuAyXz2MmRfXG-NAUfhKr0KCxRNZbvHmlckYfT-WBzwwpiMAl0wDFY~Pl8cqXuhfikSG5WrqdPfDNNIBuuznS0dqaczf~OyVaoEOpvuP3qV6wKqbSSLpjOwwAaQPHjlRtNIW8-EtUZp-I0LT45HSoowp~6b7zYmpIyoATvIP~sT0g0MTrczWhbVTUZnEkZeLhOR0Duw1-IRXI2KHPbA24wLO9LdpKKUXed05RTz0QklW5ROgR6TYv7aXFufX8kC0-DaKvQ5JKG~h8lcoHvm1RCzNqVE-2aiZnO2xH08H-iCWoLNJE-Td2kT-Tsc~3QdQcnEUcL5BF-VT~QYRld2--9r0gfGl-yDrJZrlrihHGr5J7ImahelNn9PpkVp6eIyABRmJHf2iicrk3CtjeG1j9OgTSwaNmEpUpn4aN7Kx0zNLdH7z6uTgCGD9Kmh1MFYrsoNlTp4AAAA.i2p/bittorrent/announce.php"
, "Orion's tracker", "http://gKik1lMlRmuroXVGTZ~7v4Vez3L3ZSpddrGZBrxVriosCQf7iHu6CIk8t15BKsj~P0JJpxrofeuxtm7SCUAJEr0AIYSYw8XOmp35UfcRPQWyb1LsxUkMT4WqxAT3s1ClIICWlBu5An~q-Mm0VFlrYLIPBWlUFnfPR7jZ9uP5ZMSzTKSMYUWao3ejiykr~mtEmyls6g-ZbgKZawa9II4zjOy-hdxHgP-eXMDseFsrym4Gpxvy~3Fv9TuiSqhpgm~UeTo5YBfxn6~TahKtE~~sdCiSydqmKBhxAQ7uT9lda7xt96SS09OYMsIWxLeQUWhns-C~FjJPp1D~IuTrUpAFcVEGVL-BRMmdWbfOJEcWPZ~CBCQSO~VkuN1ebvIOr9JBerFMZSxZtFl8JwcrjCIBxeKPBmfh~xYh16BJm1BBBmN1fp2DKmZ2jBNkAmnUbjQOqWvUcehrykWk5lZbE7bjJMDFH48v3SXwRuDBiHZmSbsTY6zhGY~GkMQHNGxPMMSIAAAA.i2p/bt/announce.php"
// , "The freak's tracker", "http://mHKva9x24E5Ygfey2llR1KyQHv5f8hhMpDMwJDg1U-hABpJ2NrQJd6azirdfaR0OKt4jDlmP2o4Qx0H598~AteyD~RJU~xcWYdcOE0dmJ2e9Y8-HY51ie0B1yD9FtIV72ZI-V3TzFDcs6nkdX9b81DwrAwwFzx0EfNvK1GLVWl59Ow85muoRTBA1q8SsZImxdyZ-TApTVlMYIQbdI4iQRwU9OmmtefrCe~ZOf4UBS9-KvNIqUL0XeBSqm0OU1jq-D10Ykg6KfqvuPnBYT1BYHFDQJXW5DdPKwcaQE4MtAdSGmj1epDoaEBUa9btQlFsM2l9Cyn1hzxqNWXELmx8dRlomQLlV4b586dRzW~fLlOPIGC13ntPXogvYvHVyEyptXkv890jC7DZNHyxZd5cyrKC36r9huKvhQAmNABT2Y~pOGwVrb~RpPwT0tBuPZ3lHYhBFYmD8y~AOhhNHKMLzea1rfwTvovBMByDdFps54gMN1mX4MbCGT4w70vIopS9yAAAA.i2p/bytemonsoon/announce.php"
};
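The addTorrent() change above avoids holding the _snarks map lock while a torrent file is read and verified: a separate add lock serializes the slow step, with a re-check of the map inside it before and after building. The pattern in isolation (all names here are illustrative, not from the source):

```java
import java.util.HashMap;
import java.util.Map;

// Isolated sketch of the two-lock pattern used by addTorrent(): hold the map
// lock only for lookups/inserts, and a separate lock for the slow build step.
public class SlowAdd {
    private final Map<String, Object> map = new HashMap<String, Object>();
    private final Object addLock = new Object();

    Object getOrBuild(String key) {
        synchronized (map) {
            Object v = map.get(key);
            if (v != null)
                return v;
        }
        synchronized (addLock) {
            // double-check: another thread may have built it meanwhile
            synchronized (map) {
                Object v = map.get(key);
                if (v != null)
                    return v;
            }
            Object built = buildSlowly(key); // expensive: map lock not held
            synchronized (map) {
                map.put(key, built);
            }
            return built;
        }
    }

    /** Stands in for reading and verifying a torrent file. */
    Object buildSlowly(String key) {
        return new Object();
    }
}
```

The add lock keeps two threads from verifying the same torrent concurrently, while readers of the map are never blocked behind the verification.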


@@ -39,7 +39,7 @@ public class Storage
private final StorageListener listener;
private final BitField bitfield; // BitField to represent the pieces
private BitField bitfield; // BitField to represent the pieces
private int needed; // Number of pieces needed
// XXX - Not always set correctly
@@ -48,6 +48,7 @@ public class Storage
/** The default piece size. */
private static int MIN_PIECE_SIZE = 256*1024;
private static int MAX_PIECE_SIZE = 1024*1024;
/** The maximum number of pieces in a torrent. */
private static long MAX_PIECES = 100*1024/20;
@@ -90,7 +91,7 @@ public class Storage
piece_size = MIN_PIECE_SIZE;
pieces = (int) ((total - 1)/piece_size) + 1;
while (pieces > MAX_PIECES)
while (pieces > MAX_PIECES && piece_size < MAX_PIECE_SIZE)
{
piece_size = piece_size*2;
pieces = (int) ((total - 1)/piece_size) +1;
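The loop above picks the piece size by doubling from MIN_PIECE_SIZE until the piece count drops under MAX_PIECES or the new MAX_PIECE_SIZE cap is reached. The same arithmetic as a standalone sketch (the class name is hypothetical; the constants are from the diff):

```java
// Sketch of the piece-size selection above: start at MIN_PIECE_SIZE and double
// until the piece count fits under MAX_PIECES or MAX_PIECE_SIZE is reached.
public class PieceSizer {
    static final int MIN_PIECE_SIZE = 256 * 1024;
    static final int MAX_PIECE_SIZE = 1024 * 1024;
    static final long MAX_PIECES = 100 * 1024 / 20; // 5120

    /** @return {pieceSize, pieceCount} for a torrent of the given total size */
    static long[] size(long total) {
        long pieceSize = MIN_PIECE_SIZE;
        long pieces = (total - 1) / pieceSize + 1;
        while (pieces > MAX_PIECES && pieceSize < MAX_PIECE_SIZE) {
            pieceSize *= 2;
            pieces = (total - 1) / pieceSize + 1;
        }
        return new long[] { pieceSize, pieces };
    }
}
```

With the cap, a very large torrent can now exceed MAX_PIECES rather than growing pieces past 1MB.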
@@ -155,7 +156,7 @@ public class Storage
byte[] piece = new byte[piece_size];
for (int i = 0; i < pieces; i++)
{
int length = getUncheckedPiece(i, piece, 0);
int length = getUncheckedPiece(i, piece);
digest.update(piece, 0, length);
byte[] hash = digest.digest();
for (int j = 0; j < 20; j++)
@@ -183,7 +184,7 @@ public class Storage
byte[] piece = new byte[piece_size];
for (int i = 0; i < pieces; i++)
{
int length = getUncheckedPiece(i, piece, 0);
int length = getUncheckedPiece(i, piece);
digest.update(piece, 0, length);
byte[] hash = digest.digest();
for (int j = 0; j < 20; j++)
@@ -218,6 +219,8 @@ public class Storage
{
File f = (File)it.next();
names[i] = f.getPath();
if (base.isDirectory() && names[i].startsWith(base.getPath()))
names[i] = names[i].substring(base.getPath().length() + 1);
lengths[i] = f.length();
rafs[i] = new RandomAccessFile(f, "r");
i++;
@@ -334,6 +337,49 @@ public class Storage
checkCreateFiles();
}
/**
* Reopen the file descriptors for a restart
* Do existence check but no length check or data reverification
*/
public void reopen(String rootDir) throws IOException
{
File base = new File(rootDir, filterName(metainfo.getName()));
List files = metainfo.getFiles();
if (files == null)
{
// Reopen base as file.
Snark.debug("Reopening file: " + base, Snark.NOTICE);
if (!base.exists())
throw new IOException("Could not reopen file " + base);
if (!base.canWrite()) // hope we can get away with this, if we are only seeding...
rafs[0] = new RandomAccessFile(base, "r");
else
rafs[0] = new RandomAccessFile(base, "rw");
}
else
{
// Reopen base as dir.
Snark.debug("Reopening directory: " + base, Snark.NOTICE);
if (!base.isDirectory())
throw new IOException("Could not reopen directory " + base);
int size = files.size();
for (int i = 0; i < size; i++)
{
File f = createFileFromNames(base, (List)files.get(i));
if (!f.exists())
throw new IOException("Could not reopen file " + f);
if (!f.canWrite()) // see above re: only seeding
rafs[i] = new RandomAccessFile(f, "r");
else
rafs[i] = new RandomAccessFile(f, "rw");
}
}
}
/**
* Removes 'suspicious' characters from the given file name.
*/
@@ -399,7 +445,7 @@ public class Storage
byte[] piece = new byte[metainfo.getPieceLength(0)];
for (int i = 0; i < pieces; i++)
{
int length = getUncheckedPiece(i, piece, 0);
int length = getUncheckedPiece(i, piece);
boolean correctHash = metainfo.checkPiece(i, piece, 0, length);
if (correctHash)
{
@@ -462,16 +508,23 @@ public class Storage
}
/**
* Returns a byte array containing the requested piece or null if
* Returns a byte array containing a portion of the requested piece or null if
* the storage doesn't contain the piece yet.
*/
public byte[] getPiece(int piece) throws IOException
public byte[] getPiece(int piece, int off, int len) throws IOException
{
if (!bitfield.get(piece))
return null;
byte[] bs = new byte[metainfo.getPieceLength(piece)];
getUncheckedPiece(piece, bs, 0);
//Catch a common place for OOMs esp. on 1MB pieces
byte[] bs;
try {
bs = new byte[len];
} catch (OutOfMemoryError oom) {
I2PSnarkUtil.instance().debug("Out of memory, can't honor request for piece " + piece, Snark.WARNING, oom);
return null;
}
getUncheckedPiece(piece, bs, off, len);
return bs;
}
@@ -482,10 +535,11 @@ public class Storage
* matches), otherwise false.
* @exception IOException when some storage related error occurs.
*/
public boolean putPiece(int piece, byte[] bs) throws IOException
public boolean putPiece(int piece, byte[] ba) throws IOException
{
// First check if the piece is correct.
// If we were paranoia we could copy the array first.
// Copy the array first to be paranoid.
byte[] bs = (byte[]) ba.clone();
int length = bs.length;
boolean correctHash = metainfo.checkPiece(piece, bs, 0, length);
if (listener != null)
@@ -504,6 +558,7 @@ public class Storage
needed--;
complete = needed == 0;
}
}
// Early typecast, avoid possibly overflowing a temp integer
@@ -538,23 +593,44 @@ public class Storage
}
if (complete) {
listener.storageCompleted(this);
// listener.storageCompleted(this);
// do we also need to close all of the files and reopen
// them readonly?
// Do a complete check to be sure.
// Temporarily resets the 'needed' variable and 'bitfield', then call
// checkCreateFiles() which will set 'needed' and 'bitfield'
// and also call listener.storageCompleted() if the double-check
// was successful.
// Todo: set a listener variable so the web shows "checking" and don't
// have the user panic when completed amount goes to zero temporarily?
needed = metainfo.getPieces();
bitfield = new BitField(needed);
checkCreateFiles();
if (needed > 0) {
listener.setWantedPieces(this);
Snark.debug("WARNING: Not really done, missing " + needed
+ " pieces", Snark.WARNING);
}
}
return true;
}
private int getUncheckedPiece(int piece, byte[] bs, int off)
private int getUncheckedPiece(int piece, byte[] bs)
throws IOException
{
return getUncheckedPiece(piece, bs, 0, metainfo.getPieceLength(piece));
}
private int getUncheckedPiece(int piece, byte[] bs, int off, int length)
throws IOException
{
// XXX - copy/paste code from putPiece().
// Early typecast, avoid possibly overflowing a temp integer
long start = (long) piece * (long) metainfo.getPieceLength(0);
long start = ((long) piece * (long) metainfo.getPieceLength(0)) + off;
int length = metainfo.getPieceLength(piece);
int i = 0;
long raflen = lengths[i];
while (start > raflen)
@@ -572,7 +648,7 @@ public class Storage
synchronized(rafs[i])
{
rafs[i].seek(start);
rafs[i].readFully(bs, off + read, len);
rafs[i].readFully(bs, read, len);
}
read += len;
if (need - len > 0)

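getUncheckedPiece(piece, bs, off, len) above computes a global byte offset (piece times the nominal piece length, plus the chunk offset) and then walks the per-file lengths to find which backing file holds it. The index arithmetic in isolation (a hypothetical helper, mirroring the loop's `>` comparison):

```java
// Sketch of the (piece, off) -> (file index, offset-in-file) mapping done by
// getUncheckedPiece(): compute the global offset, then walk the file lengths,
// subtracting each file's length until the offset falls inside a file.
public class OffsetMap {
    /** @return {fileIndex, offsetWithinFile} for the requested chunk start */
    static long[] locate(long[] lengths, int piece, int pieceLength, int off) {
        long start = (long) piece * pieceLength + off;
        int i = 0;
        while (start > lengths[i]) {
            start -= lengths[i];
            i++;
        }
        return new long[] { i, start };
    }
}
```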

@@ -55,4 +55,11 @@ public interface StorageListener
*
*/
void storageCompleted(Storage storage);
/** Reset the coordinator's wanted-pieces table
 * Called after the storage double-check fails
 *
 * @param storage the storage whose double-check failed
 */
void setWantedPieces(Storage storage);
}


@@ -43,6 +43,8 @@ public class TrackerClient extends I2PThread
private static final String STOPPED_EVENT = "stopped";
private final static int SLEEP = 5; // 5 minutes.
private final static int DELAY_MIN = 2000; // 2 secs.
private final static int DELAY_MUL = 1500; // 1.5 secs.
private final MetaInfo meta;
private final PeerCoordinator coordinator;
@@ -110,6 +112,7 @@ public class TrackerClient extends I2PThread
long left = coordinator.getLeft();
boolean completed = (left == 0);
int sleptTime = 0;
try
{
@@ -117,6 +120,7 @@ public class TrackerClient extends I2PThread
boolean started = false;
while (!started)
{
sleptTime = 0;
try
{
// Send start.
@@ -125,18 +129,20 @@ public class TrackerClient extends I2PThread
STARTED_EVENT);
Set peers = info.getPeers();
coordinator.trackerSeenPeers = peers.size();
coordinator.trackerProblems = null;
if (!completed) {
Iterator it = peers.iterator();
while (it.hasNext()) {
Peer cur = (Peer)it.next();
coordinator.addPeer(cur);
int delay = 3000;
int c = ((int)cur.getPeerID().getAddress().calculateHash().toBase64().charAt(0)) % 10;
try { Thread.sleep(delay * c); } catch (InterruptedException ie) {}
int delay = DELAY_MUL;
delay *= ((int)cur.getPeerID().getAddress().calculateHash().toBase64().charAt(0)) % 10;
delay += DELAY_MIN;
sleptTime += delay;
try { Thread.sleep(delay); } catch (InterruptedException ie) {}
}
}
started = true;
coordinator.trackerProblems = null;
}
catch (IOException ioe)
{
@@ -147,7 +153,10 @@ public class TrackerClient extends I2PThread
coordinator.trackerProblems = ioe.getMessage();
}
if (!started && !stop)
if (stop)
break;
if (!started)
{
Snark.debug(" Retrying in one minute...", Snark.DEBUG);
try
@@ -168,8 +177,17 @@ public class TrackerClient extends I2PThread
try
{
// Sleep some minutes...
int delay = SLEEP*60*1000 + r.nextInt(120*1000);
Thread.sleep(delay);
int delay;
if(coordinator.trackerProblems != null && !completed) {
delay = 60*1000;
} else if(completed) {
delay = 3*SLEEP*60*1000 + r.nextInt(120*1000);
} else {
delay = SLEEP*60*1000 + r.nextInt(120*1000);
delay -= sleptTime;
}
if (delay > 0)
Thread.sleep(delay);
}
catch(InterruptedException interrupt)
{
@@ -196,6 +214,7 @@ public class TrackerClient extends I2PThread
event = NO_EVENT;
// Only do a request when necessary.
sleptTime = 0;
if (event == COMPLETED_EVENT
|| coordinator.needPeers()
|| System.currentTimeMillis() > lastRequestTime + interval)
@@ -206,6 +225,7 @@ public class TrackerClient extends I2PThread
uploaded, downloaded, left,
event);
coordinator.trackerProblems = null;
Set peers = info.getPeers();
coordinator.trackerSeenPeers = peers.size();
if ( (left > 0) && (!completed) ) {
@@ -216,10 +236,14 @@ public class TrackerClient extends I2PThread
Iterator it = ordered.iterator();
while (it.hasNext()) {
Peer cur = (Peer)it.next();
coordinator.addPeer(cur);
int delay = 3000;
int c = ((int)cur.getPeerID().getAddress().calculateHash().toBase64().charAt(0)) % 10;
try { Thread.sleep(delay * c); } catch (InterruptedException ie) {}
// only delay if we actually make an attempt to add peer
if(coordinator.addPeer(cur)) {
int delay = DELAY_MUL;
delay *= ((int)cur.getPeerID().getAddress().calculateHash().toBase64().charAt(0)) % 10;
delay += DELAY_MIN;
sleptTime += delay;
try { Thread.sleep(delay); } catch (InterruptedException ie) {}
}
}
}
}
@@ -229,6 +253,7 @@ public class TrackerClient extends I2PThread
Snark.debug
("WARNING: Could not contact tracker at '"
+ announce + "': " + ioe, Snark.WARNING);
coordinator.trackerProblems = ioe.getMessage();
}
}
}
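The revised peer-add throttle above derives a per-peer delay from one base64 character of the peer's hash: DELAY_MUL milliseconds per bucket (the character's code modulo 10, so not digit-based) plus a DELAY_MIN floor, i.e. roughly 2 to 15.5 seconds. The arithmetic alone (the class name is hypothetical; the constants are from the diff):

```java
// Sketch of the per-peer connect delay above: 1.5 s per bucket (0-9) derived
// from a character of the peer hash, plus a 2 s floor. Note that char % 10
// operates on the character's code point, e.g. '0' (48) maps to bucket 8.
public class PeerDelay {
    static final int DELAY_MIN = 2000; // 2 secs
    static final int DELAY_MUL = 1500; // 1.5 secs

    static int delayMs(char hashChar) {
        return DELAY_MUL * (hashChar % 10) + DELAY_MIN;
    }
}
```

Spreading the delays this way staggers outbound connection attempts instead of bursting them all at once after a tracker response.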


@@ -43,6 +43,7 @@ public class I2PSnarkServlet extends HttpServlet {
req.setCharacterEncoding("UTF-8");
resp.setCharacterEncoding("UTF-8");
resp.setContentType("text/html; charset=UTF-8");
long stats[] = {0,0,0,0};
String nonce = req.getParameter("nonce");
if ( (nonce != null) && (nonce.equals(String.valueOf(_nonce))) )
@@ -55,10 +56,19 @@ public class I2PSnarkServlet extends HttpServlet {
out.write(HEADER);
out.write("<table border=\"0\" width=\"100%\">\n");
out.write("<tr><td width=\"5%\" class=\"snarkTitle\" valign=\"top\" align=\"left\">");
out.write("<tr><td width=\"20%\" class=\"snarkTitle\" valign=\"top\" align=\"left\">");
out.write("I2PSnark<br />\n");
out.write("<a href=\"" + req.getRequestURI() + "\" class=\"snarkRefresh\">Refresh</a>\n");
out.write("</td><td width=\"95%\" class=\"snarkMessages\" valign=\"top\" align=\"left\"><pre>");
out.write("<table border=\"0\" width=\"100%\">\n");
out.write("<tr><td><a href=\"" + req.getRequestURI() + "\" class=\"snarkRefresh\">Refresh</a><br />\n");
out.write("<td><a href=\"http://forum.i2p/viewforum.php?f=21\" class=\"snarkRefresh\">Forum</a><br />\n");
out.write("<tr><td><a href=\"http://de-ebook-archiv.i2p/pub/bt/\" class=\"snarkRefresh\">eBook</a><br />\n");
out.write("<td><a href=\"http://gaytorrents.i2p/\" class=\"snarkRefresh\">GayTorrents</a><br />\n");
out.write("<tr><td><a href=\"http://nickyb.i2p/bittorrent/\" class=\"snarkRefresh\">NickyB</a><br />\n");
out.write("<td><a href=\"http://orion.i2p/bt/\" class=\"snarkRefresh\">Orion</a><br />\n");
out.write("<tr><td><a href=\"http://tracker.postman.i2p/\" class=\"snarkRefresh\">Postman</a><br />\n");
out.write("<td>&nbsp;\n");
out.write("</table>\n");
out.write("</td><td width=\"80%\" class=\"snarkMessages\" valign=\"top\" align=\"left\"><pre>");
List msgs = _manager.getMessages();
for (int i = msgs.size()-1; i >= 0; i--) {
String msg = (String)msgs.get(i);
@@ -66,16 +76,30 @@ public class I2PSnarkServlet extends HttpServlet {
}
out.write("</pre></td></tr></table>\n");
out.write(TABLE_HEADER);
List snarks = getSortedSnarks(req);
String uri = req.getRequestURI();
out.write(TABLE_HEADER);
out.write("<th align=\"left\" valign=\"top\">");
if (I2PSnarkUtil.instance().connected())
out.write("<a href=\"" + uri + "?action=StopAll&nonce=" + _nonce +
"\" title=\"Stop all torrents and the i2p tunnel\">Stop All</a>");
else
out.write("&nbsp;");
out.write("</th></tr></thead>\n");
for (int i = 0; i < snarks.size(); i++) {
Snark snark = (Snark)snarks.get(i);
displaySnark(out, snark, uri, i);
displaySnark(out, snark, uri, i, stats);
}
if (snarks.size() <= 0) {
out.write(TABLE_EMPTY);
} else if (snarks.size() > 1) {
out.write(TABLE_TOTAL);
out.write(" <th align=\"right\" valign=\"top\">" + formatSize(stats[0]) + "</th>\n" +
" <th align=\"right\" valign=\"top\">" + formatSize(stats[1]) + "</th>\n" +
" <th align=\"right\" valign=\"top\">" + formatSize(stats[2]) + "ps</th>\n" +
" <th align=\"right\" valign=\"top\">" + formatSize(stats[3]) + "ps</th>\n" +
" <th>&nbsp;</th></tr>\n" +
"</tfoot>\n");
}
out.write(TABLE_FOOTER);
@@ -240,7 +264,9 @@ public class I2PSnarkServlet extends HttpServlet {
if ( (announceURLOther != null) && (announceURLOther.trim().length() > "http://.i2p/announce".length()) )
announceURL = announceURLOther;
if (baseFile.exists() && baseFile.isFile()) {
if (announceURL == null || announceURL.length() <= 0)
_manager.addMessage("Error creating torrent - you must select a tracker");
else if (baseFile.exists()) {
try {
Storage s = new Storage(baseFile, announceURL, null);
s.create();
@@ -258,12 +284,22 @@ public class I2PSnarkServlet extends HttpServlet {
} catch (IOException ioe) {
_manager.addMessage("Error creating a torrent for " + baseFile.getAbsolutePath() + ": " + ioe.getMessage());
}
} else if (baseFile.exists()) {
_manager.addMessage("I2PSnark doesn't yet support creating multifile torrents");
} else {
_manager.addMessage("Cannot create a torrent for the nonexistant data: " + baseFile.getAbsolutePath());
_manager.addMessage("Cannot create a torrent for the nonexistent data: " + baseFile.getAbsolutePath());
}
}
} else if ("StopAll".equals(action)) {
_manager.addMessage("Stopping all torrents and closing the I2P tunnel");
List snarks = getSortedSnarks(req);
for (int i = 0; i < snarks.size(); i++) {
Snark snark = (Snark)snarks.get(i);
if (!snark.stopped)
_manager.stopTorrent(snark.torrent, false);
}
if (I2PSnarkUtil.instance().connected()) {
I2PSnarkUtil.instance().disconnect();
_manager.addMessage("I2P tunnel closed");
}
}
}
@@ -279,12 +315,16 @@ public class I2PSnarkServlet extends HttpServlet {
}
return rv;
}
private static final int MAX_DISPLAYED_FILENAME_LENGTH = 60;
private void displaySnark(PrintWriter out, Snark snark, String uri, int row) throws IOException {
private static final int MAX_DISPLAYED_ERROR_LENGTH = 30;
private void displaySnark(PrintWriter out, Snark snark, String uri, int row, long stats[]) throws IOException {
String filename = snark.torrent;
File f = new File(filename);
filename = f.getName(); // the torrent may be the canonical name, so let's just grab the local name
int i = filename.lastIndexOf(".torrent");
if (i > 0)
filename = filename.substring(0, i);
if (filename.length() > MAX_DISPLAYED_FILENAME_LENGTH)
filename = filename.substring(0, MAX_DISPLAYED_FILENAME_LENGTH) + "...";
long total = snark.meta.getTotalLength();
@@ -292,32 +332,62 @@ public class I2PSnarkServlet extends HttpServlet {
long remaining = (long) snark.storage.needed() * (long) snark.meta.getPieceLength(0);
if (remaining > total)
remaining = total;
int totalBps = 4096; // should probably grab this from the snark...
long remainingSeconds = remaining / totalBps;
long uploaded = snark.coordinator.getUploaded();
long downBps = 0;
long upBps = 0;
if (snark.coordinator != null) {
downBps = snark.coordinator.getDownloadRate();
upBps = snark.coordinator.getUploadRate();
}
long remainingSeconds;
if (downBps > 0)
remainingSeconds = remaining / downBps;
else
remainingSeconds = -1;
boolean isRunning = !snark.stopped;
long uploaded = 0;
if (snark.coordinator != null) {
uploaded = snark.coordinator.getUploaded();
stats[0] += snark.coordinator.getDownloaded();
}
stats[1] += uploaded;
if (isRunning) {
stats[2] += downBps;
stats[3] += upBps;
}
boolean isValid = snark.meta != null;
boolean singleFile = snark.meta.getFiles() == null;
String err = snark.coordinator.trackerProblems;
int curPeers = snark.coordinator.getPeerCount();
int knownPeers = snark.coordinator.trackerSeenPeers;
String err = null;
int curPeers = 0;
int knownPeers = 0;
if (snark.coordinator != null) {
err = snark.coordinator.trackerProblems;
curPeers = snark.coordinator.getPeerCount();
knownPeers = snark.coordinator.trackerSeenPeers;
}
String statusString = "Unknown";
if (err != null) {
if (isRunning)
statusString = "TrackerErr (" + curPeers + "/" + knownPeers + " peers)";
else
else {
if (err.length() > MAX_DISPLAYED_ERROR_LENGTH)
err = err.substring(0, MAX_DISPLAYED_ERROR_LENGTH) + "...";
statusString = "TrackerErr (" + err + ")";
}
} else if (remaining <= 0) {
if (isRunning)
statusString = "Seeding (" + curPeers + "/" + knownPeers + " peers)";
else
statusString = "Complete";
} else {
if (isRunning)
if (isRunning && curPeers > 0 && downBps > 0)
statusString = "OK (" + curPeers + "/" + knownPeers + " peers)";
else if (isRunning && curPeers > 0)
statusString = "Stalled (" + curPeers + "/" + knownPeers + " peers)";
else if (isRunning)
statusString = "No Peers (0/" + knownPeers + ")";
else
statusString = "Stopped";
}
@@ -336,20 +406,26 @@ public class I2PSnarkServlet extends HttpServlet {
out.write("</a>");
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"left\" class=\"snarkTorrentDownloaded " + rowClass + "\">");
if (remaining > 0) {
out.write(formatSize(total-remaining) + "/" + formatSize(total)); // 18MB/3GB
// lets hold off on the ETA until we have rates sorted...
//out.write(" (eta " + DataHelper.formatDuration(remainingSeconds*1000) + ")"); // (eta 6h)
} else {
out.write(formatSize(total)); // 3GB
}
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentETA " + rowClass + "\">");
if(isRunning && remainingSeconds > 0)
out.write(DataHelper.formatDuration(remainingSeconds*1000)); // (eta 6h)
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"left\" class=\"snarkTorrentUploaded " + rowClass
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentDownloaded " + rowClass + "\">");
if (remaining > 0)
out.write(formatSize(total-remaining) + "/" + formatSize(total)); // 18MB/3GB
else
out.write(formatSize(total)); // 3GB
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentUploaded " + rowClass
+ "\">" + formatSize(uploaded) + "</td>\n\t");
//out.write("<td valign=\"top\" align=\"left\" class=\"snarkTorrentRate\">");
//out.write("n/a"); //2KBps/12KBps/4KBps
//out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentRate\">");
if(isRunning && remaining > 0)
out.write(formatSize(downBps) + "ps");
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentRate\">");
if(isRunning)
out.write(formatSize(upBps) + "ps");
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"left\" class=\"snarkTorrentAction " + rowClass + "\">");
if (isRunning) {
out.write("<a href=\"" + uri + "?action=Stop&nonce=" + _nonce
@@ -381,7 +457,8 @@ public class I2PSnarkServlet extends HttpServlet {
// *not* enctype="multipart/form-data", so that the input type=file sends the filename, not the file
out.write("<form action=\"" + uri + "\" method=\"POST\">\n");
out.write("<input type=\"hidden\" name=\"nonce\" value=\"" + _nonce + "\" />\n");
out.write("From URL&nbsp;: <input type=\"text\" name=\"newURL\" size=\"50\" value=\"" + newURL + "\" /> \n");
out.write("<span class=\"snarkConfigTitle\">Add Torrent:</span><br />\n");
out.write("From URL&nbsp;: <input type=\"text\" name=\"newURL\" size=\"80\" value=\"" + newURL + "\" /> \n");
// not supporting from file at the moment, since the file name passed isn't always absolute (so it may not resolve)
//out.write("From file: <input type=\"file\" name=\"newFile\" size=\"50\" value=\"" + newFile + "\" /><br />\n");
out.write("<input type=\"submit\" value=\"Add torrent\" name=\"action\" /><br />\n");
@@ -396,10 +473,11 @@ public class I2PSnarkServlet extends HttpServlet {
if (baseFile == null)
baseFile = "";
out.write("<span class=\"snarkNewTorrent\">\n");
out.write("<span class=\"snarkNewTorrent\"><hr />\n");
// *not* enctype="multipart/form-data", so that the input type=file sends the filename, not the file
out.write("<form action=\"" + uri + "\" method=\"POST\">\n");
out.write("<input type=\"hidden\" name=\"nonce\" value=\"" + _nonce + "\" />\n");
out.write("<span class=\"snarkConfigTitle\">Create Torrent:</span><br />\n");
//out.write("From file: <input type=\"file\" name=\"newFile\" size=\"50\" value=\"" + newFile + "\" /><br />\n");
out.write("Data to seed: " + _manager.getDataDir().getAbsolutePath() + File.separatorChar
+ "<input type=\"text\" name=\"baseFile\" size=\"20\" value=\"" + baseFile
@@ -480,7 +558,7 @@ public class I2PSnarkServlet extends HttpServlet {
String val = (String)options.get(key);
opts.append(key).append('=').append(val).append(' ');
}
out.write("I2CP opts: <input type=\"text\" name=\"i2cpOpts\" size=\"40\" value=\""
out.write("I2CP opts: <input type=\"text\" name=\"i2cpOpts\" size=\"80\" value=\""
+ opts.toString() + "\" /><br />\n");
out.write("<input type=\"submit\" value=\"Save configuration\" name=\"action\" />\n");
out.write("</span>\n");
@@ -540,7 +618,10 @@ public class I2PSnarkServlet extends HttpServlet {
"}\n" +
"th {\n" +
" background-color: #C7D5D5;\n" +
" margin: 0px 0px 0px 0px;\n" +
" padding: 0px 7px 0px 3px;\n" +
"}\n" +
"td {\n" +
" padding: 0px 7px 0px 3px;\n" +
"}\n" +
".snarkTorrentEven {\n" +
" background-color: #E7E7E7;\n" +
@@ -550,8 +631,6 @@ public class I2PSnarkServlet extends HttpServlet {
"}\n" +
".snarkNewTorrent {\n" +
" font-size: 10pt;\n" +
" font-family: monospace;\n" +
" background-color: #ADAE9;\n" +
"}\n" +
".snarkAddInfo {\n" +
" font-size: 10pt;\n" +
@@ -568,19 +647,24 @@ public class I2PSnarkServlet extends HttpServlet {
"<body>\n";
private static final String TABLE_HEADER = "<table border=\"0\" class=\"snarkTorrents\" width=\"100%\">\n" +
private static final String TABLE_HEADER = "<table border=\"0\" class=\"snarkTorrents\" width=\"100%\" cellpadding=\"0 10px\">\n" +
"<thead>\n" +
"<tr><th align=\"left\" valign=\"top\">Status</th>\n" +
" <th align=\"left\" valign=\"top\">Torrent</th>\n" +
" <th align=\"left\" valign=\"top\">Downloaded</th>\n" +
" <th align=\"left\" valign=\"top\">Uploaded</th>\n" +
//" <th align=\"left\" valign=\"top\">Rate</th>\n" +
" <th>&nbsp;</th></tr>\n" +
"</thead>\n";
" <th align=\"right\" valign=\"top\">ETA</th>\n" +
" <th align=\"right\" valign=\"top\">Downloaded</th>\n" +
" <th align=\"right\" valign=\"top\">Uploaded</th>\n" +
" <th align=\"right\" valign=\"top\">Down Rate</th>\n" +
" <th align=\"right\" valign=\"top\">Up Rate</th>\n";
private static final String TABLE_TOTAL = "<tfoot>\n" +
"<tr><th align=\"left\" valign=\"top\">Totals</th>\n" +
" <th>&nbsp;</th>\n" +
" <th>&nbsp;</th>\n";
private static final String TABLE_EMPTY = "<tr class=\"snarkTorrentEven\">" +
"<td class=\"snarkTorrentEven\" align=\"left\"" +
" valign=\"top\" colspan=\"5\">No torrents</td></tr>\n";
" valign=\"top\" colspan=\"8\">No torrents</td></tr>\n";
private static final String TABLE_FOOTER = "</table>\n";

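The table code above leans on a `formatSize` helper (the caller appends "ps" for the rate columns) that this hunk never shows. A minimal stand-in, assuming conventional binary units as the "18MB/3GB" comments suggest — not necessarily the real I2PSnark helper:

```java
// Hypothetical stand-in for the formatSize(long) helper used by the
// totals row and per-torrent columns above: renders a byte count with
// a binary-unit suffix (B, KB, MB, GB).
public class SizeFormat {
    public static String formatSize(long bytes) {
        if (bytes < 1024)
            return bytes + "B";
        if (bytes < 1024L * 1024)
            return (bytes / 1024) + "KB";
        if (bytes < 1024L * 1024 * 1024)
            return (bytes / (1024L * 1024)) + "MB";
        return (bytes / (1024L * 1024 * 1024)) + "GB";
    }
}
```

With this, the rate cells above become e.g. `formatSize(downBps) + "ps"` → "12KBps".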

@@ -112,6 +112,16 @@ public class I2PTunnelHTTPClient extends I2PTunnelClientBase implements Runnable
"or naming one of them differently.<P/>")
.getBytes();
private final static byte[] ERR_BAD_PROTOCOL =
("HTTP/1.1 403 Bad Protocol\r\n"+
"Content-Type: text/html; charset=iso-8859-1\r\n"+
"Cache-control: no-cache\r\n"+
"\r\n"+
"<html><body><H1>I2P ERROR: NON-HTTP PROTOCOL</H1>"+
"The request uses a bad protocol. "+
"The I2P HTTP Proxy supports http:// requests ONLY. Other protocols such as https:// and ftp:// are not allowed.<BR>")
.getBytes();
/** used to assign unique IDs to the threads / clients. no logic or functionality */
private static volatile long __clientId = 0;
@@ -483,7 +493,10 @@ public class I2PTunnelHTTPClient extends I2PTunnelClientBase implements Runnable
if (method == null || destination == null) {
l.log("No HTTP method found in the request.");
if (out != null) {
out.write(ERR_REQUEST_DENIED);
if ("http://".equalsIgnoreCase(protocol))
out.write(ERR_REQUEST_DENIED);
else
out.write(ERR_BAD_PROTOCOL);
out.write("<p /><i>Generated on: ".getBytes());
out.write(new Date().toString().getBytes());
out.write("</i></body></html>\n".getBytes());

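The new dispatch above picks between the two canned error pages purely on the parsed protocol string. A sketch of that selection in isolation (constants shortened, names hypothetical):

```java
// Sketch of the error-page selection added in the hunk above: a request
// that still fails with an http:// protocol gets the generic denial page,
// while https://, ftp://, etc. get the new bad-protocol page.
public class ProtocolCheck {
    static final String ERR_REQUEST_DENIED = "I2P ERROR: REQUEST DENIED";
    static final String ERR_BAD_PROTOCOL = "I2P ERROR: NON-HTTP PROTOCOL";

    public static String errorFor(String protocol) {
        if ("http://".equalsIgnoreCase(protocol))
            return ERR_REQUEST_DENIED;
        return ERR_BAD_PROTOCOL;
    }
}
```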

@@ -5,6 +5,7 @@ import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;
import java.lang.IndexOutOfBoundsException;
import net.i2p.I2PAppContext;
import net.i2p.client.streaming.I2PSocket;
@@ -43,7 +44,7 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
l,
notifyThis,
"IRCHandler " + (++__clientId), tunnel);
StringTokenizer tok = new StringTokenizer(destinations, ",");
dests = new ArrayList(1);
while (tok.hasMoreTokens()) {
@@ -80,9 +81,10 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
try {
i2ps = createI2PSocket(dest);
i2ps.setReadTimeout(readTimeout);
Thread in = new I2PThread(new IrcInboundFilter(s,i2ps));
StringBuffer expectedPong = new StringBuffer();
Thread in = new I2PThread(new IrcInboundFilter(s,i2ps, expectedPong));
in.start();
Thread out = new I2PThread(new IrcOutboundFilter(s,i2ps));
Thread out = new I2PThread(new IrcOutboundFilter(s,i2ps, expectedPong));
out.start();
} catch (Exception ex) {
if (_log.shouldLog(Log.ERROR))
@@ -118,10 +120,12 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
private Socket local;
private I2PSocket remote;
private StringBuffer expectedPong;
IrcInboundFilter(Socket _local, I2PSocket _remote) {
IrcInboundFilter(Socket _local, I2PSocket _remote, StringBuffer pong) {
local=_local;
remote=_remote;
expectedPong=pong;
}
public void run() {
@@ -146,7 +150,9 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
break;
if(inmsg.endsWith("\r"))
inmsg=inmsg.substring(0,inmsg.length()-1);
String outmsg = inboundFilter(inmsg);
if (_log.shouldLog(Log.DEBUG))
_log.debug("in: [" + inmsg + "]");
String outmsg = inboundFilter(inmsg, expectedPong);
if(outmsg!=null)
{
if(!inmsg.equals(outmsg)) {
@@ -188,10 +194,12 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
private Socket local;
private I2PSocket remote;
private StringBuffer expectedPong;
IrcOutboundFilter(Socket _local, I2PSocket _remote) {
IrcOutboundFilter(Socket _local, I2PSocket _remote, StringBuffer pong) {
local=_local;
remote=_remote;
expectedPong=pong;
}
public void run() {
@@ -216,7 +224,9 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
break;
if(inmsg.endsWith("\r"))
inmsg=inmsg.substring(0,inmsg.length()-1);
String outmsg = outboundFilter(inmsg);
if (_log.shouldLog(Log.DEBUG))
_log.debug("out: [" + inmsg + "]");
String outmsg = outboundFilter(inmsg, expectedPong);
if(outmsg!=null)
{
if(!inmsg.equals(outmsg)) {
@@ -255,7 +265,7 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
*
*/
public static String inboundFilter(String s) {
public String inboundFilter(String s, StringBuffer expectedPong) {
String field[]=s.split(" ",4);
String command;
@@ -263,8 +273,8 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
final String[] allowedCommands =
{
"NOTICE",
"PING",
"PONG",
//"PING",
//"PONG",
"MODE",
"JOIN",
"NICK",
@@ -272,14 +282,21 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
"PART",
"WALLOPS",
"ERROR",
"KICK",
"H", // "hide operator status" (after kicking an op)
"TOPIC"
};
if(field[0].charAt(0)==':')
idx++;
command = field[idx++];
try { command = field[idx++]; }
catch (IndexOutOfBoundsException ioobe) // server sent a malformed command
{
_log.warn("Dropping defective message: index out of bounds while extracting command.");
return null;
}
idx++; //skip victim
// Allow numerical responses
@@ -287,6 +304,21 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
new Integer(command);
return s;
} catch(NumberFormatException nfe){}
if ("PING".equals(command))
return "PING 127.0.0.1"; // no way to know what the ircd to i2ptunnel server con is, so localhost works
if ("PONG".equals(command)) {
// Turn the received ":irc.freshcoffee.i2p PONG irc.freshcoffee.i2p :127.0.0.1"
// into ":127.0.0.1 PONG 127.0.0.1 " so that the caller can append the client's extra parameter
// though, does 127.0.0.1 work for irc clients connecting remotely? and for all of them? sure would
// be great if irc clients actually followed the RFCs here, but I guess that's too much to ask.
// If we haven't PINGed them, or the PING we sent isn't something we know how to filter, this
// is blank.
String pong = expectedPong.length() > 0 ? expectedPong.toString() : null;
expectedPong.setLength(0);
return pong;
}
// Allow all allowedCommands
for(int i=0;i<allowedCommands.length;i++) {
@@ -318,14 +350,13 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
return null;
}
public static String outboundFilter(String s) {
public String outboundFilter(String s, StringBuffer expectedPong) {
String field[]=s.split(" ",3);
String command;
final String[] allowedCommands =
{
"NOTICE",
"PONG",
"MODE",
"JOIN",
"NICK",
@@ -339,7 +370,8 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
"MAP", // seems safe enough, the ircd should protect itself though
"PART",
"OPER",
"PING",
// "PONG", // replaced with a filtered PING/PONG since some clients send the server IP (thanks aardvax!)
// "PING",
"KICK",
"HELPME",
"RULES",
@@ -355,6 +387,43 @@ public class I2PTunnelIRCClient extends I2PTunnelClientBase implements Runnable
command = field[0].toUpperCase();
if ("PING".equals(command)) {
// Most clients just send a PING and are happy with any old PONG. Others,
// like BitchX, actually expect certain behavior. It sends two different pings:
// "PING :irc.freshcoffee.i2p" and "PING 1234567890 127.0.0.1" (where the IP is the proxy)
// the PONG to the former seems to be "PONG 127.0.0.1", while the PONG to the latter is
// ":irc.freshcoffee.i2p PONG irc.freshcoffee.i2p :1234567890".
// We don't want to send them our proxy's IP address, so we need to rewrite the PING
// sent to the server, but when we get a PONG back, use what we expected, rather than
// what they sent.
//
// Yuck.
String rv = null;
expectedPong.setLength(0);
if (field.length == 1) { // PING
rv = "PING";
expectedPong.append("PONG 127.0.0.1");
} else if (field.length == 2) { // PING nonce
rv = "PING " + field[1];
expectedPong.append("PONG ").append(field[1]);
} else if (field.length == 3) { // PING nonce serverLocation
rv = "PING " + field[1];
expectedPong.append("PONG ").append(field[1]);
} else {
if (_log.shouldLog(Log.ERROR))
_log.error("IRC client sent a PING we don't understand, filtering it (\"" + s + "\")");
rv = null;
}
if (_log.shouldLog(Log.WARN))
_log.warn("sending ping " + rv + ", waiting for " + expectedPong + " orig was [" + s + "]");
return rv;
}
if ("PONG".equals(command))
return "PONG 127.0.0.1"; // no way to know what the ircd to i2ptunnel server con is, so localhost works
// Allow all allowedCommands
for(int i=0;i<allowedCommands.length;i++)
{

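The two filters cooperate through the shared `expectedPong` StringBuffer: `outboundFilter` rewrites the client's PING so the proxy's IP never reaches the server and records the PONG the client expects, and `inboundFilter` then substitutes that recorded reply when the server's PONG comes back. A simplified, self-contained model of just that handshake (not the full command filter):

```java
// Simplified model of the PING/PONG rewriting shown above. The shared
// StringBuffer carries the PONG the client expects across the two filters.
public class PingPongFilter {
    // Outbound: rewrite "PING nonce serverLocation" to "PING nonce",
    // remembering "PONG nonce" as the reply the client wants back.
    public static String outbound(String msg, StringBuffer expectedPong) {
        String[] field = msg.split(" ", 3);
        if (!"PING".equals(field[0].toUpperCase()))
            return msg; // the real filter checks many more commands
        expectedPong.setLength(0);
        if (field.length == 1) {
            expectedPong.append("PONG 127.0.0.1");
            return "PING";
        }
        expectedPong.append("PONG ").append(field[1]);
        return "PING " + field[1];
    }

    // Inbound: replace whatever PONG the server actually sent with the
    // one the client expects, then clear the buffer.
    public static String inbound(String msg, StringBuffer expectedPong) {
        if (!msg.toUpperCase().contains("PONG"))
            return msg;
        String pong = expectedPong.length() > 0 ? expectedPong.toString() : null;
        expectedPong.setLength(0);
        return pong;
    }
}
```

This mirrors why both `PING` and `PONG` move out of the plain allow-lists above: they are now rewritten rather than passed through.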

@@ -190,6 +190,7 @@ public class IndexBean {
}
private String saveChanges() {
// Get current tunnel controller
TunnelController cur = getController(_tunnel);
Properties config = getConfig();
@@ -205,21 +206,28 @@ public class IndexBean {
} else {
cur.setConfig(config, "");
}
if ("ircclient".equals(cur.getType()) ||
"httpclient".equals(cur.getType()) ||
"client".equals(cur.getType())) {
// all clients use the same I2CP session, and as such, use the same
// I2CP options
// Only modify other shared tunnels
// if the current tunnel is shared, and of supported type
if ("true".equalsIgnoreCase(cur.getSharedClient()) &&
("ircclient".equals(cur.getType()) ||
"httpclient".equals(cur.getType()) ||
"client".equals(cur.getType()))) {
// all clients use the same I2CP session, and as such, use the same I2CP options
List controllers = _group.getControllers();
for (int i = 0; i < controllers.size(); i++) {
TunnelController c = (TunnelController)controllers.get(i);
// Current tunnel modified by user, skip
if (c == cur) continue;
//only change when they really are declared of beeing a sharedClient
if (("httpclient".equals(c.getType()) ||
"ircclient".equals(c.getType())||
"client".equals(c.getType())
) && "true".equalsIgnoreCase(c.getSharedClient())) {
// Only modify this non-current tunnel
// if it belongs to a shared destination, and is of supported type
if ("true".equalsIgnoreCase(c.getSharedClient()) &&
("httpclient".equals(c.getType()) ||
"ircclient".equals(c.getType()) ||
"client".equals(c.getType()))) {
Properties cOpt = c.getConfig("");
if (_tunnelQuantity != null) {
cOpt.setProperty("option.inbound.quantity", _tunnelQuantity);

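Both the current-tunnel check and the per-controller check in the hunk above apply the same eligibility test — shared client, supported type — so it can be read as one predicate. A sketch (hypothetical helper, not actually in IndexBean):

```java
// The eligibility test applied twice in the hunk above: a tunnel's I2CP
// options are only propagated when it is declared a shared client AND is
// of one of the supported client types.
public class SharedClientCheck {
    public static boolean isSharedClient(String type, String sharedClient) {
        return "true".equalsIgnoreCase(sharedClient) &&
               ("httpclient".equals(type) ||
                "ircclient".equals(type) ||
                "client".equals(type));
    }
}
```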
Binary file not shown.


@@ -25,6 +25,7 @@
<pathelement location="../../systray/java/build/systray.jar" />
<pathelement location="../../systray/java/lib/systray4j.jar" />
<pathelement location="../../../installer/lib/wrapper/win32/wrapper.jar" /> <!-- we don't care if we're not on win32 -->
<pathelement location="../../jrobin/jrobin-1.4.0.jar" />
</classpath>
</javac>
</target>
@@ -34,6 +35,12 @@
<attribute name="Class-Path" value="i2p.jar router.jar" />
</manifest>
</jar>
<delete dir="./tmpextract" />
<unjar src="../../jrobin/jrobin-1.4.0.jar" dest="./tmpextract" />
<jar destfile="./build/routerconsole.jar" basedir="./tmpextract" update="true" />
<delete dir="./tmpextract" />
<ant target="war" />
</target>
<target name="war" depends="precompilejsp">
@@ -60,6 +67,7 @@
<pathelement location="../../systray/java/lib/systray4j.jar" />
<pathelement location="../../../installer/lib/wrapper/win32/wrapper.jar" />
<pathelement location="build/routerconsole.jar" />
<pathelement location="build/" />
<pathelement location="../../../router/java/build/router.jar" />
<pathelement location="../../../core/java/build/i2p.jar" />
</classpath>
@@ -86,6 +94,7 @@
<pathelement location="../../systray/java/lib/systray4j.jar" />
<pathelement location="../../../installer/lib/wrapper/win32/wrapper.jar" />
<pathelement location="build/routerconsole.jar" />
<pathelement location="build" />
<pathelement location="../../../router/java/build/router.jar" />
<pathelement location="../../../core/java/build/i2p.jar" />
</classpath>


@@ -30,7 +30,6 @@ import net.i2p.router.web.ConfigServiceHandler.UpdateWrapperManagerAndRekeyTask;
*/
public class ConfigNetHandler extends FormHandler {
private String _hostname;
private boolean _guessRequested;
private boolean _reseedRequested;
private boolean _saveRequested;
private boolean _recheckReachabilityRequested;
@@ -38,6 +37,8 @@ public class ConfigNetHandler extends FormHandler {
private boolean _requireIntroductions;
private boolean _hiddenMode;
private boolean _dynamicKeys;
private String _ntcpHostname;
private String _ntcpPort;
private String _tcpPort;
private String _udpPort;
private String _inboundRate;
@@ -52,11 +53,7 @@ public class ConfigNetHandler extends FormHandler {
private boolean _ratesOnly;
protected void processForm() {
if (_guessRequested) {
guessHostname();
} else if (_reseedRequested) {
reseed();
} else if (_saveRequested || ( (_action != null) && ("Save changes".equals(_action)) )) {
if (_saveRequested || ( (_action != null) && ("Save changes".equals(_action)) )) {
saveChanges();
} else if (_recheckReachabilityRequested) {
recheckReachability();
@@ -65,8 +62,6 @@ public class ConfigNetHandler extends FormHandler {
}
}
public void setGuesshost(String moo) { _guessRequested = true; }
public void setReseed(String moo) { _reseedRequested = true; }
public void setSave(String moo) { _saveRequested = true; }
public void setEnabletimesync(String moo) { _timeSyncEnabled = true; }
public void setRecheckReachability(String moo) { _recheckReachabilityRequested = true; }
@@ -82,6 +77,12 @@ public class ConfigNetHandler extends FormHandler {
public void setTcpPort(String port) {
_tcpPort = (port != null ? port.trim() : null);
}
public void setNtcphost(String host) {
_ntcpHostname = (host != null ? host.trim() : null);
}
public void setNtcpport(String port) {
_ntcpPort = (port != null ? port.trim() : null);
}
public void setUdpPort(String port) {
_udpPort = (port != null ? port.trim() : null);
}
@@ -103,125 +104,9 @@ public class ConfigNetHandler extends FormHandler {
public void setOutboundburstfactor(String factor) {
_outboundBurst = (factor != null ? factor.trim() : null);
}
public void setReseedfrom(String url) {
_reseedFrom = (url != null ? url.trim() : null);
}
public void setSharePercentage(String pct) {
_sharePct = (pct != null ? pct.trim() : null);
}
private static final String IP_PREFIX = "<h1>Your IP is ";
private static final String IP_SUFFIX = " <br></h1>";
private void guessHostname() {
BufferedReader reader = null;
try {
URL url = new URL("http://www.whatismyip.com/");
URLConnection con = url.openConnection();
con.connect();
reader = new BufferedReader(new InputStreamReader(con.getInputStream()));
String line = null;
while ( (line = reader.readLine()) != null) {
if (line.startsWith(IP_PREFIX)) {
int end = line.indexOf(IP_SUFFIX);
if (end == -1) {
addFormError("Unable to guess the host (BAD_SUFFIX)");
return;
}
String ip = line.substring(IP_PREFIX.length(), end);
addFormNotice("Host guess: " + ip);
return;
}
}
addFormError("Unable to guess the host (NO_PREFIX)");
} catch (IOException ioe) {
addFormError("Unable to guess the host (IO_ERROR)");
_context.logManager().getLog(ConfigNetHandler.class).error("Unable to guess the host", ioe);
} finally {
if (reader != null) try { reader.close(); } catch (IOException ioe) {}
}
}
private static final String DEFAULT_SEED_URL = ReseedHandler.DEFAULT_SEED_URL;
/**
* Reseed has been requested, so lets go ahead and do it. Fetch all of
* the routerInfo-*.dat files from the specified URL (or the default) and
* save them into this router's netDb dir.
*
*/
private void reseed() {
String seedURL = DEFAULT_SEED_URL;
if (_reseedFrom != null)
seedURL = _reseedFrom;
try {
URL dir = new URL(seedURL);
String content = new String(readURL(dir));
Set urls = new HashSet();
int cur = 0;
while (true) {
int start = content.indexOf("href=\"routerInfo-", cur);
if (start < 0)
break;
int end = content.indexOf(".dat\">", start);
String name = content.substring(start+"href=\"routerInfo-".length(), end);
urls.add(name);
cur = end + 1;
}
int fetched = 0;
int errors = 0;
for (Iterator iter = urls.iterator(); iter.hasNext(); ) {
try {
fetchSeed(seedURL, (String)iter.next());
fetched++;
} catch (Exception e) {
errors++;
}
}
addFormNotice("Reseeded with " + fetched + " peers (and " + errors + " failures)");
} catch (Throwable t) {
_context.logManager().getLog(ConfigNetHandler.class).error("Error reseeding", t);
addFormError("Error reseeding (RESEED_EXCEPTION)");
}
}
private void fetchSeed(String seedURL, String peer) throws Exception {
URL url = new URL(seedURL + (seedURL.endsWith("/") ? "" : "/") + "routerInfo-" + peer + ".dat");
byte data[] = readURL(url);
writeSeed(peer, data);
}
private byte[] readURL(URL url) throws Exception {
ByteArrayOutputStream baos = new ByteArrayOutputStream(1024);
URLConnection con = url.openConnection();
InputStream in = con.getInputStream();
byte buf[] = new byte[1024];
while (true) {
int read = in.read(buf);
if (read < 0)
break;
baos.write(buf, 0, read);
}
in.close();
return baos.toByteArray();
}
private void writeSeed(String name, byte data[]) throws Exception {
// props taken from KademliaNetworkDatabaseFacade...
String dirName = _context.getProperty("router.networkDatabase.dbDir", "netDb");
File netDbDir = new File(dirName);
if (!netDbDir.exists()) {
boolean ok = netDbDir.mkdirs();
if (ok)
addFormNotice("Network database directory created: " + dirName);
else
addFormNotice("Error creating network database directory: " + dirName);
}
FileOutputStream fos = new FileOutputStream(new File(netDbDir, "routerInfo-" + name + ".dat"));
fos.write(data);
fos.close();
}
private void recheckReachability() {
_context.commSystem().recheckReachability();
@@ -256,6 +141,30 @@ public class ConfigNetHandler extends FormHandler {
restartRequired = true;
}
}
if ( (_ntcpHostname != null) && (_ntcpHostname.length() > 0) && (_ntcpPort != null) && (_ntcpPort.length() > 0) ) {
String oldHost = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_HOSTNAME);
String oldPort = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_PORT);
if ( (oldHost == null) || (!oldHost.equalsIgnoreCase(_ntcpHostname)) ||
(oldPort == null) || (!oldPort.equalsIgnoreCase(_ntcpPort)) ) {
_context.router().setConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_HOSTNAME, _ntcpHostname);
_context.router().setConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_PORT, _ntcpPort);
addFormNotice("Updating inbound TCP settings from " + oldHost + ":" + oldPort
+ " to " + _ntcpHostname + ":" + _ntcpPort);
restartRequired = true;
}
} else {
String oldHost = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_HOSTNAME);
String oldPort = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_PORT);
if ( (oldHost != null) || (oldPort != null) ) {
_context.router().removeConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_HOSTNAME);
_context.router().removeConfigSetting(ConfigNetHelper.PROP_I2NP_NTCP_PORT);
addFormNotice("Updating inbound TCP settings from " + oldHost + ":" + oldPort
+ " so that we no longer receive inbound TCP connections");
restartRequired = true;
}
}
if ( (_udpPort != null) && (_udpPort.length() > 0) ) {
String oldPort = _context.router().getConfigSetting(ConfigNetHelper.PROP_I2NP_UDP_PORT);
if ( (oldPort == null) && (_udpPort.equals("8887")) ) {
@@ -284,7 +193,7 @@ public class ConfigNetHandler extends FormHandler {
// If hidden mode value changes, restart is required
if (_hiddenMode && "false".equalsIgnoreCase(_context.getProperty(Router.PROP_HIDDEN, "false"))) {
_context.router().setConfigSetting(Router.PROP_HIDDEN, "true");
_context.router().getRouterInfo().addCapability(RouterInfo.CAPABILITY_HIDDEN);
_context.router().addCapabilities(_context.router().getRouterInfo());
addFormNotice("Gracefully restarting into Hidden Router Mode. Make sure you have no 0-1 length "
+ "<a href=\"configtunnels.jsp\">tunnels!</a>");
hiddenSwitch();

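The NTCP block above follows a set-or-clear pattern: when hostname and port are both supplied and differ from the stored values they are written back (restart required), and when either is empty both keys are removed. In outline, against a plain `Properties` store (a simplification of the router's config API):

```java
import java.util.Properties;

// Outline of the set-or-clear update pattern used for the NTCP
// hostname/port settings above, against a plain Properties store.
public class NtcpConfig {
    static final String PROP_HOST = "i2np.ntcp.hostname";
    static final String PROP_PORT = "i2np.ntcp.port";

    /** @return true if the stored settings changed (i.e. restart required) */
    public static boolean update(Properties cfg, String host, String port) {
        String oldHost = cfg.getProperty(PROP_HOST);
        String oldPort = cfg.getProperty(PROP_PORT);
        if (host != null && host.length() > 0 && port != null && port.length() > 0) {
            // equalsIgnoreCase(null) is false, so a missing old value counts as changed
            if (host.equalsIgnoreCase(oldHost) && port.equalsIgnoreCase(oldPort))
                return false; // unchanged
            cfg.setProperty(PROP_HOST, host);
            cfg.setProperty(PROP_PORT, port);
            return true;
        }
        if (oldHost == null && oldPort == null)
            return false; // nothing to clear
        cfg.remove(PROP_HOST);
        cfg.remove(PROP_PORT);
        return true;
    }
}
```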

@@ -48,6 +48,18 @@ public class ConfigNetHelper {
}
return "" + port;
}
public final static String PROP_I2NP_NTCP_HOSTNAME = "i2np.ntcp.hostname";
public final static String PROP_I2NP_NTCP_PORT = "i2np.ntcp.port";
public String getNtcphostname() {
String hostname = _context.getProperty(PROP_I2NP_NTCP_HOSTNAME);
if (hostname == null) return "";
return hostname;
}
public String getNtcpport() {
String port = _context.getProperty(PROP_I2NP_NTCP_PORT);
if (port == null) return "";
return port;
}
public String getUdpAddress() {
RouterAddress addr = _context.router().getRouterInfo().getTargetAddress("SSU");


@@ -0,0 +1,122 @@
package net.i2p.router.web;
import java.io.IOException;
import java.io.Writer;
import java.util.*;
import net.i2p.data.DataHelper;
import net.i2p.stat.Rate;
import net.i2p.router.RouterContext;
public class GraphHelper {
private RouterContext _context;
private Writer _out;
private int _periodCount;
private boolean _showEvents;
private int _width;
private int _height;
private int _refreshDelaySeconds;
/**
* Configure this bean to query a particular router context
*
* @param contextId beginning few characters of the routerHash, or null to pick
* the first one we come across.
*/
public void setContextId(String contextId) {
try {
_context = ContextHelper.getContext(contextId);
} catch (Throwable t) {
t.printStackTrace();
}
}
public GraphHelper() {
_periodCount = 60; // SummaryListener.PERIODS;
_showEvents = false;
_width = 250;
_height = 100;
_refreshDelaySeconds = 60;
}
public void setOut(Writer out) { _out = out; }
public void setPeriodCount(String str) {
try { _periodCount = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
}
public void setShowEvents(boolean b) { _showEvents = b; }
public void setHeight(String str) {
try { _height = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
}
public void setWidth(String str) {
try { _width = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
}
public void setRefreshDelay(String str) {
try { _refreshDelaySeconds = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
}
public String getImages() {
try {
_out.write("<img src=\"viewstat.jsp?stat=bw.combined"
+ "&amp;periodCount=" + _periodCount
+ "&amp;width=" + _width
+ "&amp;height=" + _height
+ "\" title=\"Combined bandwidth graph\" />\n");
List listeners = StatSummarizer.instance().getListeners();
TreeSet ordered = new TreeSet(new AlphaComparator());
ordered.addAll(listeners);
for (Iterator iter = ordered.iterator(); iter.hasNext(); ) {
SummaryListener lsnr = (SummaryListener)iter.next();
Rate r = lsnr.getRate();
String title = r.getRateStat().getName() + " for " + DataHelper.formatDuration(_periodCount * r.getPeriod());
_out.write("<img src=\"viewstat.jsp?stat=" + r.getRateStat().getName()
+ "&amp;showEvents=" + _showEvents
+ "&amp;period=" + r.getPeriod()
+ "&amp;periodCount=" + _periodCount
+ "&amp;width=" + _width
+ "&amp;height=" + _height
+ "\" title=\"" + title + "\" />\n");
}
if (_refreshDelaySeconds > 0)
_out.write("<meta http-equiv=\"refresh\" content=\"" + _refreshDelaySeconds + "\" />\n");
} catch (IOException ioe) {
ioe.printStackTrace();
}
return "";
}
public String getForm() {
try {
_out.write("<form action=\"graphs.jsp\" method=\"GET\">");
_out.write("Periods: <input size=\"3\" type=\"text\" name=\"periodCount\" value=\"" + _periodCount + "\" /><br />\n");
_out.write("Plot averages: <input type=\"radio\" name=\"showEvents\" value=\"false\" " + (_showEvents ? "" : "checked=\"true\" ") + " /> ");
_out.write("or plot events: <input type=\"radio\" name=\"showEvents\" value=\"true\" "+ (_showEvents ? "checked=\"true\" " : "") + " /><br />\n");
_out.write("Image sizes: width: <input size=\"4\" type=\"text\" name=\"width\" value=\"" + _width
+ "\" /> pixels, height: <input size=\"4\" type=\"text\" name=\"height\" value=\"" + _height
+ "\" /><br />\n");
_out.write("Refresh delay: <select name=\"refreshDelay\"><option value=\"60\">1 minute</option><option value=\"120\">2 minutes</option><option value=\"300\">5 minutes</option><option value=\"600\">10 minutes</option><option value=\"-1\">Never</option></select><br />\n");
_out.write("<input type=\"submit\" value=\"Redraw\" />");
} catch (IOException ioe) {
ioe.printStackTrace();
}
return "";
}
public String getPeerSummary() {
try {
_context.commSystem().renderStatusHTML(_out);
_context.bandwidthLimiter().renderStatusHTML(_out);
} catch (IOException ioe) {
ioe.printStackTrace();
}
return "";
}
}
class AlphaComparator implements Comparator {
public int compare(Object lhs, Object rhs) {
SummaryListener l = (SummaryListener)lhs;
SummaryListener r = (SummaryListener)rhs;
String lName = l.getRate().getRateStat().getName() + "." + l.getRate().getPeriod();
String rName = r.getRate().getRateStat().getName() + "." + r.getRate().getPeriod();
return lName.compareTo(rName);
}
}
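The AlphaComparator above orders listeners by a composite `name.period` string key. A minimal standalone sketch of that pattern (the `StatKeySort` class and its inputs are hypothetical stand-ins, not the I2P classes):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch of the AlphaComparator pattern above: build a "name.period"
// key per item and sort lexicographically, as lName.compareTo(rName) does.
public class StatKeySort {
    static String key(String name, long period) {
        return name + "." + period;
    }

    static List<String> sortedKeys(List<String[]> stats) {
        List<String> keys = new ArrayList<>();
        for (String[] s : stats)
            keys.add(key(s[0], Long.parseLong(s[1])));
        Collections.sort(keys); // lexicographic, like the comparator above
        return keys;
    }

    public static void main(String[] args) {
        List<String[]> stats = Arrays.asList(
                new String[] {"bw.sendRate", "60000"},
                new String[] {"bw.recvRate", "60000"});
        System.out.println(sortedKeys(stats));
    }
}
```

Note the comparison is purely lexicographic, so two rates of the same stat sort by the string form of their periods, not numerically.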


@@ -8,6 +8,8 @@ import net.i2p.router.RouterContext;
public class PeerHelper {
private RouterContext _context;
private Writer _out;
private int _sortFlags;
private String _urlBase;
/**
* Configure this bean to query a particular router context
*
@@ -25,10 +27,22 @@ public class PeerHelper {
public PeerHelper() {}
public void setOut(Writer out) { _out = out; }
public void setSort(String flags) {
if (flags != null) {
try {
_sortFlags = Integer.parseInt(flags);
} catch (NumberFormatException nfe) {
_sortFlags = 0;
}
} else {
_sortFlags = 0;
}
}
public void setUrlBase(String base) { _urlBase = base; }
public String getPeerSummary() {
try {
_context.commSystem().renderStatusHTML(_out);
_context.commSystem().renderStatusHTML(_out, _urlBase, _sortFlags);
_context.bandwidthLimiter().renderStatusHTML(_out);
} catch (IOException ioe) {
ioe.printStackTrace();


@@ -22,6 +22,7 @@ import net.i2p.util.I2PThread;
*
*/
public class ReseedHandler {
private static ReseedRunner _reseedRunner = new ReseedRunner();
public void setReseedNonce(String nonce) {
@@ -66,7 +67,7 @@ public class ReseedHandler {
*
*/
private static void reseed(boolean echoStatus) {
String seedURL = System.getProperty("i2p.reseedURL", DEFAULT_SEED_URL);
String seedURL = I2PAppContext.getGlobalContext().getProperty("i2p.reseedURL", DEFAULT_SEED_URL);
if ( (seedURL == null) || (seedURL.trim().length() <= 0) )
seedURL = DEFAULT_SEED_URL;
try {

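The ReseedHandler change above swaps `System.getProperty` for an `I2PAppContext` lookup so a reseed URL set in advanced configuration actually takes effect. The resulting fallback chain can be sketched as follows (the `PropertyLookup` class, the `Map` standing in for the app context, and the placeholder URL are all assumptions for illustration):

```java
import java.util.Collections;
import java.util.Map;

// Sketch of a context-first property lookup with a hard-coded default,
// mirroring the reseed URL change above. The Map stands in for
// I2PAppContext's configuration.
public class PropertyLookup {
    static final String DEFAULT_SEED_URL = "http://example.invalid/netdb/"; // placeholder

    static String reseedUrl(Map<String, String> contextProps) {
        // Prefer the app-context property (set via advanced configuration),
        // fall back to the JVM system property, then the default.
        String url = contextProps.get("i2p.reseedURL");
        if (url == null)
            url = System.getProperty("i2p.reseedURL");
        if (url == null || url.trim().length() <= 0)
            url = DEFAULT_SEED_URL;
        return url;
    }

    public static void main(String[] args) {
        System.out.println(reseedUrl(Collections.emptyMap()));
    }
}
```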

@@ -25,6 +25,7 @@ public class RouterConsoleRunner {
static {
System.setProperty("org.mortbay.http.Version.paranoid", "true");
System.setProperty("java.awt.headless", "true");
}
public RouterConsoleRunner(String args[]) {
@@ -95,6 +96,10 @@ public class RouterConsoleRunner {
I2PThread t = new I2PThread(fetcher, "NewsFetcher");
t.setDaemon(true);
t.start();
I2PThread st = new I2PThread(new StatSummarizer(), "StatSummarizer");
st.setDaemon(true);
st.start();
}
private void initialize(WebApplicationContext context) {


@@ -0,0 +1,234 @@
package net.i2p.router.web;
import java.io.*;
import java.util.*;
import net.i2p.stat.*;
import net.i2p.router.*;
import net.i2p.util.Log;
import java.awt.Color;
import org.jrobin.graph.RrdGraph;
import org.jrobin.graph.RrdGraphDef;
import org.jrobin.graph.RrdGraphDefTemplate;
import org.jrobin.core.RrdException;
/**
*
*/
public class StatSummarizer implements Runnable {
private RouterContext _context;
private Log _log;
/** list of SummaryListener instances */
private List _listeners;
private static StatSummarizer _instance;
public StatSummarizer() {
_context = (RouterContext)RouterContext.listContexts().get(0); // only summarize the first context per JVM
_log = _context.logManager().getLog(getClass());
_listeners = new ArrayList(16);
_instance = this;
}
public static StatSummarizer instance() { return _instance; }
public void run() {
String specs = "";
while (_context.router().isAlive()) {
specs = adjustDatabases(specs);
try { Thread.sleep(60*1000); } catch (InterruptedException ie) {}
}
}
/** list of SummaryListener instances */
List getListeners() { return _listeners; }
private static final String DEFAULT_DATABASES = "bw.sendRate.60000" +
",bw.recvRate.60000" +
",tunnel.testSuccessTime.60000" +
",udp.outboundActiveCount.60000" +
",udp.receivePacketSize.60000" +
",udp.receivePacketSkew.60000" +
",udp.sendConfirmTime.60000" +
",udp.sendPacketSize.60000" +
",router.activePeers.60000" +
",router.activeSendPeers.60000" +
",tunnel.acceptLoad.60000" +
",tunnel.dropLoadProactive.60000" +
",tunnel.buildExploratorySuccess.60000" +
",tunnel.buildExploratoryReject.60000" +
",tunnel.buildExploratoryExpire.60000" +
",client.sendAckTime.60000" +
",client.dispatchNoACK.60000" +
",ntcp.sendTime.60000" +
",ntcp.transmitTime.60000" +
",ntcp.sendBacklogTime.60000" +
",ntcp.receiveTime.60000" +
",transport.sendMessageFailureLifetime.60000" +
",transport.sendProcessingTime.60000";
private String adjustDatabases(String oldSpecs) {
String spec = _context.getProperty("stat.summaries", DEFAULT_DATABASES);
if ( ( (spec == null) && (oldSpecs == null) ) ||
( (spec != null) && (oldSpecs != null) && (oldSpecs.equals(spec))) )
return oldSpecs;
List old = parseSpecs(oldSpecs);
List newSpecs = parseSpecs(spec);
// remove old ones
for (int i = 0; i < old.size(); i++) {
Rate r = (Rate)old.get(i);
if (!newSpecs.contains(r))
removeDb(r);
}
// add new ones
StringBuffer buf = new StringBuffer();
for (int i = 0; i < newSpecs.size(); i++) {
Rate r = (Rate)newSpecs.get(i);
if (!old.contains(r))
addDb(r);
buf.append(r.getRateStat().getName()).append(".").append(r.getPeriod());
if (i + 1 < newSpecs.size())
buf.append(',');
}
return buf.toString();
}
private void removeDb(Rate r) {
for (int i = 0; i < _listeners.size(); i++) {
SummaryListener lsnr = (SummaryListener)_listeners.get(i);
if (lsnr.getRate().equals(r)) {
_listeners.remove(i);
lsnr.stopListening();
return;
}
}
}
private void addDb(Rate r) {
SummaryListener lsnr = new SummaryListener(r);
_listeners.add(lsnr);
lsnr.startListening();
//System.out.println("Start listening for " + r.getRateStat().getName() + ": " + r.getPeriod());
}
public boolean renderPng(Rate rate, OutputStream out) throws IOException {
return renderPng(rate, out, -1, -1, false, false, false, false, -1, true);
}
public boolean renderPng(Rate rate, OutputStream out, int width, int height, boolean hideLegend, boolean hideGrid, boolean hideTitle, boolean showEvents, int periodCount, boolean showCredit) throws IOException {
for (int i = 0; i < _listeners.size(); i++) {
SummaryListener lsnr = (SummaryListener)_listeners.get(i);
if (lsnr.getRate().equals(rate)) {
lsnr.renderPng(out, width, height, hideLegend, hideGrid, hideTitle, showEvents, periodCount, showCredit);
return true;
}
}
return false;
}
public boolean renderPng(OutputStream out, String templateFilename) throws IOException {
SummaryRenderer.render(_context, out, templateFilename);
return true;
}
public boolean getXML(Rate rate, OutputStream out) throws IOException {
for (int i = 0; i < _listeners.size(); i++) {
SummaryListener lsnr = (SummaryListener)_listeners.get(i);
if (lsnr.getRate().equals(rate)) {
lsnr.getData().exportXml(out);
out.write(("<!-- Rate: " + lsnr.getRate().getRateStat().getName() + " for period " + lsnr.getRate().getPeriod() + " -->\n").getBytes());
out.write(("<!-- Average data source name: " + lsnr.getName() + " event count data source name: " + lsnr.getEventName() + " -->\n").getBytes());
return true;
}
}
return false;
}
public boolean renderRatePng(OutputStream out, int width, int height, boolean hideLegend, boolean hideGrid, boolean hideTitle, boolean showEvents, int periodCount, boolean showCredit) throws IOException {
long end = _context.clock().now();
if (periodCount <= 0) periodCount = SummaryListener.PERIODS;
if (periodCount > SummaryListener.PERIODS)
periodCount = SummaryListener.PERIODS;
long period = 60*1000;
long start = end - period*periodCount;
long begin = System.currentTimeMillis();
try {
RrdGraphDef def = new RrdGraphDef();
def.setTimePeriod(start/1000, end/1000);
String title = "Bandwidth usage";
if (!hideTitle)
def.setTitle(title);
String sendName = SummaryListener.createName(_context, "bw.sendRate.60000");
String recvName = SummaryListener.createName(_context, "bw.recvRate.60000");
def.datasource(sendName, sendName, sendName, "AVERAGE", "MEMORY");
def.datasource(recvName, recvName, recvName, "AVERAGE", "MEMORY");
def.area(sendName, Color.BLUE, "Outbound bytes/second");
//def.line(sendName, Color.BLUE, "Outbound bytes/second", 3);
//def.line(recvName, Color.RED, "Inbound bytes/second@r", 3);
def.area(recvName, Color.RED, "Inbound bytes/second@r");
if (!hideLegend) {
def.gprint(sendName, "AVERAGE", "outbound average: @2@sbytes/second");
def.gprint(sendName, "MAX", " max: @2@sbytes/second@r");
def.gprint(recvName, "AVERAGE", "inbound average: @2@sbytes/second");
def.gprint(recvName, "MAX", " max: @2@sbytes/second@r");
}
if (!showCredit)
def.setShowSignature(false);
if (hideLegend)
def.setShowLegend(false);
if (hideGrid) {
def.setGridX(false);
def.setGridY(false);
}
//System.out.println("rendering: path=" + path + " dsNames[0]=" + dsNames[0] + " dsNames[1]=" + dsNames[1] + " lsnr.getName=" + _listener.getName());
def.setAntiAliasing(false);
//System.out.println("Rendering: \n" + def.exportXmlTemplate());
//System.out.println("*****************\nData: \n" + _listener.getData().dump());
RrdGraph graph = new RrdGraph(def);
//System.out.println("Graph created");
byte data[] = null;
if ( (width <= 0) || (height <= 0) )
data = graph.getPNGBytes();
else
data = graph.getPNGBytes(width, height);
long timeToPlot = System.currentTimeMillis() - begin;
out.write(data);
//File t = File.createTempFile("jrobinData", ".xml");
//_listener.getData().dumpXml(new FileOutputStream(t));
//System.out.println("plotted: " + (data != null ? data.length : 0) + " bytes in " + timeToPlot
// ); // + ", data written to " + t.getAbsolutePath());
return true;
} catch (RrdException re) {
_log.error("Error rendering", re);
throw new IOException("Error plotting: " + re.getMessage());
} catch (IOException ioe) {
_log.error("Error rendering", ioe);
throw ioe;
}
}
/**
* @param specs statName.period,statName.period,statName.period
* @return list of Rate objects
*/
private List parseSpecs(String specs) {
StringTokenizer tok = new StringTokenizer(specs, ",");
List rv = new ArrayList();
while (tok.hasMoreTokens()) {
String spec = tok.nextToken();
int split = spec.lastIndexOf('.');
if ( (split <= 0) || (split + 1 >= spec.length()) )
continue;
String name = spec.substring(0, split);
String per = spec.substring(split+1);
long period = -1;
try {
period = Long.parseLong(per);
RateStat rs = _context.statManager().getRate(name);
if (rs != null) {
Rate r = rs.getRate(period);
if (r != null)
rv.add(r);
}
} catch (NumberFormatException nfe) {}
}
return rv;
}
}
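parseSpecs() above splits each comma-separated entry on its last '.' into a stat name and a period. The parsing can be sketched in isolation like this (returning name-to-period pairs instead of Rate objects, since the I2P StatManager is not available here):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringTokenizer;

// Standalone sketch of the "statName.period" spec parsing in parseSpecs().
public class SpecParser {
    static Map<String, Long> parse(String specs) {
        Map<String, Long> rv = new LinkedHashMap<>();
        StringTokenizer tok = new StringTokenizer(specs, ",");
        while (tok.hasMoreTokens()) {
            String spec = tok.nextToken();
            int split = spec.lastIndexOf('.');
            // skip entries with no name part or no period part
            if (split <= 0 || split + 1 >= spec.length())
                continue;
            try {
                rv.put(spec.substring(0, split),
                       Long.parseLong(spec.substring(split + 1)));
            } catch (NumberFormatException nfe) {} // ignore bad periods
        }
        return rv;
    }

    public static void main(String[] args) {
        System.out.println(parse("bw.sendRate.60000,bad,tunnel.testSuccessTime.60000"));
    }
}
```

Splitting on the *last* dot is what lets stat names themselves contain dots, as every entry in DEFAULT_DATABASES does.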


@@ -107,7 +107,8 @@ public class SummaryHelper {
}
public boolean allowReseed() {
return (_context.netDb().getKnownRouters() < 30);
return (_context.netDb().getKnownRouters() < 30) ||
Boolean.valueOf(_context.getProperty("i2p.alwaysAllowReseed", "false")).booleanValue();
}
public int getAllPeers() { return _context.netDb().getKnownRouters(); }
@@ -213,11 +214,11 @@ public class SummaryHelper {
}
/**
* How fast we have been receiving data over the last minute (pretty printed
* How fast we have been receiving data over the last second (pretty printed
* string with 2 decimal places representing the KBps)
*
*/
public String getInboundMinuteKBps() {
public String getInboundSecondKBps() {
if (_context == null)
return "0.0";
double kbps = _context.bandwidthLimiter().getReceiveBps()/1024d;
@@ -225,11 +226,11 @@ public class SummaryHelper {
return fmt.format(kbps);
}
/**
* How fast we have been sending data over the last minute (pretty printed
* How fast we have been sending data over the last second (pretty printed
* string with 2 decimal places representing the KBps)
*
*/
public String getOutboundMinuteKBps() {
public String getOutboundSecondKBps() {
if (_context == null)
return "0.0";
double kbps = _context.bandwidthLimiter().getSendBps()/1024d;
@@ -493,6 +494,13 @@ public class SummaryHelper {
return _context.throttle().getTunnelLag() + "ms";
}
public String getInboundBacklog() {
if (_context == null)
return "0";
return String.valueOf(_context.tunnelManager().getInboundBuildQueueSize());
}
public boolean updateAvailable() {
return NewsFetcher.getInstance(_context).updateAvailable();
}


@@ -0,0 +1,250 @@
package net.i2p.router.web;
import java.io.*;
import net.i2p.I2PAppContext;
import net.i2p.data.DataHelper;
import net.i2p.stat.Rate;
import net.i2p.stat.RateStat;
import net.i2p.stat.RateSummaryListener;
import net.i2p.util.Log;
import org.jrobin.core.RrdDb;
import org.jrobin.core.RrdDef;
import org.jrobin.core.RrdBackendFactory;
import org.jrobin.core.RrdMemoryBackendFactory;
import org.jrobin.core.Sample;
import java.awt.Color;
import org.jrobin.graph.RrdGraph;
import org.jrobin.graph.RrdGraphDef;
import org.jrobin.graph.RrdGraphDefTemplate;
import org.jrobin.core.RrdException;
class SummaryListener implements RateSummaryListener {
private I2PAppContext _context;
private Log _log;
private Rate _rate;
private String _name;
private String _eventName;
private RrdDb _db;
private Sample _sample;
private RrdMemoryBackendFactory _factory;
private SummaryRenderer _renderer;
static final int PERIODS = 1440;
static {
try {
RrdBackendFactory.setDefaultFactory("MEMORY");
} catch (RrdException re) {
re.printStackTrace();
}
}
public SummaryListener(Rate r) {
_context = I2PAppContext.getGlobalContext();
_rate = r;
_log = _context.logManager().getLog(SummaryListener.class);
}
public void add(double totalValue, long eventCount, double totalEventTime, long period) {
long now = now();
long when = now / 1000;
//System.out.println("add to " + getRate().getRateStat().getName() + " on " + System.currentTimeMillis() + " / " + now + " / " + when);
if (_db != null) {
// add one value to the db (the average value for the period)
try {
_sample.setTime(when);
double val = eventCount > 0 ? (totalValue / (double)eventCount) : 0d;
_sample.setValue(_name, val);
_sample.setValue(_eventName, eventCount);
//_sample.setValue(0, val);
//_sample.setValue(1, eventCount);
_sample.update();
//String names[] = _sample.getDsNames();
//System.out.println("Add " + val + " over " + eventCount + " for " + _name
// + " [" + names[0] + ", " + names[1] + "]");
} catch (IOException ioe) {
_log.error("Error adding", ioe);
} catch (RrdException re) {
_log.error("Error adding", re);
}
}
}
/**
* JRobin can only deal with 20 character data source names, so we need to create a unique,
* munged version from the user/developer-visible name.
*
*/
static String createName(I2PAppContext ctx, String wanted) {
return ctx.sha().calculateHash(DataHelper.getUTF8(wanted)).toBase64().substring(0,20);
}
public Rate getRate() { return _rate; }
public void startListening() {
RateStat rs = _rate.getRateStat();
long period = _rate.getPeriod();
String baseName = rs.getName() + "." + period;
_name = createName(_context, baseName);
_eventName = createName(_context, baseName + ".events");
try {
RrdDef def = new RrdDef(_name, now()/1000, period/1000);
// for info on the heartbeat, xff, steps, etc, see the rrdcreate man page, aka
// http://www.jrobin.org/support/man/rrdcreate.html
long heartbeat = period*10/1000;
def.addDatasource(_name, "GAUGE", heartbeat, Double.NaN, Double.NaN);
def.addDatasource(_eventName, "GAUGE", heartbeat, 0, Double.NaN);
double xff = 0.9;
int steps = 1;
int rows = PERIODS;
def.addArchive("AVERAGE", xff, steps, rows);
_factory = (RrdMemoryBackendFactory)RrdBackendFactory.getDefaultFactory();
_db = new RrdDb(def, _factory);
_sample = _db.createSample();
_renderer = new SummaryRenderer(_context, this);
_rate.setSummaryListener(this);
} catch (RrdException re) {
_log.error("Error starting", re);
} catch (IOException ioe) {
_log.error("Error starting", ioe);
}
}
public void stopListening() {
if (_db == null) return;
try {
_db.close();
} catch (IOException ioe) {
_log.error("Error closing", ioe);
}
_rate.setSummaryListener(null);
_factory.delete(_db.getPath());
_db = null;
}
public void renderPng(OutputStream out, int width, int height, boolean hideLegend, boolean hideGrid, boolean hideTitle, boolean showEvents, int periodCount, boolean showCredit) throws IOException {
_renderer.render(out, width, height, hideLegend, hideGrid, hideTitle, showEvents, periodCount, showCredit);
}
public void renderPng(OutputStream out) throws IOException { _renderer.render(out); }
String getName() { return _name; }
String getEventName() { return _eventName; }
RrdDb getData() { return _db; }
long now() { return _context.clock().now(); }
public boolean equals(Object obj) {
return ((obj instanceof SummaryListener) && ((SummaryListener)obj)._rate.equals(_rate));
}
public int hashCode() { return _rate.hashCode(); }
}
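createName() above works around JRobin's 20-character data source name limit by hashing the human-readable name and truncating the base64 form. A sketch of the same munging using only the JDK (standard SHA-256 and Base64 here; I2P itself uses its own SHA and modified-alphabet Base64 helpers):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Sketch of the createName() munging above: hash the wanted name and keep
// the first 20 base64 characters, so JRobin's 20-char DS-name limit is
// satisfied while names remain effectively unique.
public class DsName {
    static String munge(String wanted) {
        try {
            byte[] hash = MessageDigest.getInstance("SHA-256")
                    .digest(wanted.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(hash).substring(0, 20);
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // SHA-256 is always present
        }
    }

    public static void main(String[] args) {
        System.out.println(munge("bw.sendRate.60000"));
    }
}
```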
class SummaryRenderer {
private Log _log;
private SummaryListener _listener;
public SummaryRenderer(I2PAppContext ctx, SummaryListener lsnr) {
_log = ctx.logManager().getLog(SummaryRenderer.class);
_listener = lsnr;
}
/**
* Render the stats as determined by the specified JRobin xml config,
* but note that this doesn't work on stock jvms, as it requires
* DOM level 3 load and store support. Perhaps we can bundle that, or
* specify who can get it from where, etc.
*
*/
public static synchronized void render(I2PAppContext ctx, OutputStream out, String filename) throws IOException {
long end = ctx.clock().now();
long start = end - 60*1000*SummaryListener.PERIODS;
long begin = System.currentTimeMillis();
try {
RrdGraphDefTemplate template = new RrdGraphDefTemplate(filename);
RrdGraphDef def = template.getRrdGraphDef();
def.setTimePeriod(start/1000, end/1000); // ignore the periods in the template
RrdGraph graph = new RrdGraph(def);
byte img[] = graph.getPNGBytes();
out.write(img);
} catch (RrdException re) {
//_log.error("Error rendering " + filename, re);
throw new IOException("Error plotting: " + re.getMessage());
} catch (IOException ioe) {
//_log.error("Error rendering " + filename, ioe);
throw ioe;
}
}
public void render(OutputStream out) throws IOException { render(out, -1, -1, false, false, false, false, -1, true); }
public void render(OutputStream out, int width, int height, boolean hideLegend, boolean hideGrid, boolean hideTitle, boolean showEvents, int periodCount, boolean showCredit) throws IOException {
long end = _listener.now();
if (periodCount <= 0) periodCount = SummaryListener.PERIODS;
if (periodCount > SummaryListener.PERIODS)
periodCount = SummaryListener.PERIODS;
long start = end - _listener.getRate().getPeriod()*periodCount;
long begin = System.currentTimeMillis();
try {
RrdGraphDef def = new RrdGraphDef();
def.setTimePeriod(start/1000, end/1000);
String title = _listener.getRate().getRateStat().getName() + " averaged for "
+ DataHelper.formatDuration(_listener.getRate().getPeriod());
if (!hideTitle)
def.setTitle(title);
String path = _listener.getData().getPath();
String dsNames[] = _listener.getData().getDsNames();
String plotName = null;
String descr = null;
if (showEvents) {
// include the average event count on the plot
plotName = dsNames[1];
descr = "Events per period";
} else {
// include the average value
plotName = dsNames[0];
descr = _listener.getRate().getRateStat().getDescription();
}
def.datasource(plotName, path, plotName, "AVERAGE", "MEMORY");
def.area(plotName, Color.BLUE, descr + "@r");
if (!hideLegend) {
def.gprint(plotName, "AVERAGE", "average: @2@s");
def.gprint(plotName, "MAX", " max: @2@s@r");
}
if (!showCredit)
def.setShowSignature(false);
/*
// these four lines set up a graph plotting both values and events on the same chart
// (but with the same coordinates, so the values may look pretty skewed)
def.datasource(dsNames[0], path, dsNames[0], "AVERAGE", "MEMORY");
def.datasource(dsNames[1], path, dsNames[1], "AVERAGE", "MEMORY");
def.area(dsNames[0], Color.BLUE, _listener.getRate().getRateStat().getDescription());
def.line(dsNames[1], Color.RED, "Events per period");
*/
if (hideLegend)
def.setShowLegend(false);
if (hideGrid) {
def.setGridX(false);
def.setGridY(false);
}
//System.out.println("rendering: path=" + path + " dsNames[0]=" + dsNames[0] + " dsNames[1]=" + dsNames[1] + " lsnr.getName=" + _listener.getName());
def.setAntiAliasing(false);
//System.out.println("Rendering: \n" + def.exportXmlTemplate());
//System.out.println("*****************\nData: \n" + _listener.getData().dump());
RrdGraph graph = new RrdGraph(def);
//System.out.println("Graph created");
byte data[] = null;
if ( (width <= 0) || (height <= 0) )
data = graph.getPNGBytes();
else
data = graph.getPNGBytes(width, height);
long timeToPlot = System.currentTimeMillis() - begin;
out.write(data);
//File t = File.createTempFile("jrobinData", ".xml");
//_listener.getData().dumpXml(new FileOutputStream(t));
//System.out.println("plotted: " + (data != null ? data.length : 0) + " bytes in " + timeToPlot
// ); // + ", data written to " + t.getAbsolutePath());
} catch (RrdException re) {
_log.error("Error rendering", re);
throw new IOException("Error plotting: " + re.getMessage());
} catch (IOException ioe) {
_log.error("Error rendering", ioe);
throw ioe;
}
}
}


@@ -43,7 +43,8 @@
A negative rate means a default limit of 16KBytes per second.</i><br />
Bandwidth share percentage:
<jsp:getProperty name="nethelper" property="sharePercentageBox" /><br />
Sharing a higher percentage will improve your anonymity and help the network
Sharing a higher percentage will improve your anonymity and help the network<br />
<input type="submit" name="save" value="Save changes" /> <input type="reset" value="Cancel" /><br />
<hr />
<b>Enable load testing: </b>
<input type="checkbox" name="enableloadtesting" value="true" <jsp:getProperty name="nethelper" property="enableLoadTesting" /> />
@@ -65,6 +66,20 @@
Users behind symmetric NATs, such as OpenBSD's pf, are not currently supported.</p>
<input type="submit" name="recheckReachability" value="Check network reachability..." />
<hr />
<b>Inbound TCP connection configuration:</b><br />
Externally reachable hostname or IP address:
<input name ="ntcphost" type="text" size="16" value="<jsp:getProperty name="nethelper" property="ntcphostname" />" />
(dyndns and the like are fine)<br />
Externally reachable TCP port:
<input name ="ntcpport" type="text" size="6" value="<jsp:getProperty name="nethelper" property="ntcpport" />" /><br />
<p>You do <i>not</i> need to allow inbound TCP connections - outbound connections work with no
configuration. However, if you want to receive inbound TCP connections, you <b>must</b> poke a hole
in your NAT or firewall for unsolicited TCP connections. If you specify the wrong IP address or
hostname, or do not properly configure your NAT or firewall, your network performance will degrade
substantially. When in doubt, leave the hostname and port number blank.</p>
<p><b>Note: changing this setting will terminate all of your connections and effectively
restart your router.</b>
<hr />
<b>Dynamic Router Keys: </b>
<input type="checkbox" name="dynamicKeys" value="true" <jsp:getProperty name="nethelper" property="dynamicKeysChecked" /> /><br />
<p>


@@ -0,0 +1,23 @@
<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html><head>
<title>I2P Router Console - graphs</title>
<link rel="stylesheet" href="default.css" type="text/css" />
</head><body>
<%@include file="nav.jsp" %>
<%@include file="summary.jsp" %>
<div class="main" id="main">
<jsp:useBean class="net.i2p.router.web.GraphHelper" id="graphHelper" scope="request" />
<jsp:setProperty name="graphHelper" property="*" />
<jsp:setProperty name="graphHelper" property="contextId" value="<%=(String)session.getAttribute("i2p.contextId")%>" />
<jsp:setProperty name="graphHelper" property="out" value="<%=out%>" />
<jsp:getProperty name="graphHelper" property="images" />
<jsp:getProperty name="graphHelper" property="form" />
</div>
</body>
</html>


@@ -33,6 +33,7 @@
<a href="netdb.jsp">NetDB</a> |
<a href="logs.jsp">Logs</a> |
<a href="jobs.jsp">Jobs</a> |
<a href="graphs.jsp">Graphs</a> |
<a href="oldstats.jsp">Stats</a> |
<a href="oldconsole.jsp">Internals</a>
<% } %>


@@ -14,6 +14,8 @@
<jsp:useBean class="net.i2p.router.web.PeerHelper" id="peerHelper" scope="request" />
<jsp:setProperty name="peerHelper" property="contextId" value="<%=(String)session.getAttribute("i2p.contextId")%>" />
<jsp:setProperty name="peerHelper" property="out" value="<%=out%>" />
<jsp:setProperty name="peerHelper" property="urlBase" value="peers.jsp" />
<jsp:setProperty name="peerHelper" property="sort" value="<%=request.getParameter("sort") != null ? request.getParameter("sort") : ""%>" />
<jsp:getProperty name="peerHelper" property="peerSummary" />
</div>


@@ -13,8 +13,8 @@
<b>Ident:</b> <jsp:getProperty name="helper" property="ident" /><br />
<b>Version:</b> <jsp:getProperty name="helper" property="version" /><br />
<b>Uptime:</b> <jsp:getProperty name="helper" property="uptime" /><br />
<b>Now:</b> <jsp:getProperty name="helper" property="time" /><br />
<b>Status:</b> <a href="config.jsp"><jsp:getProperty name="helper" property="reachability" /></a><%
<b>Now:</b> <jsp:getProperty name="helper" property="time" /><!--<br />
<b>Status:</b> <a href="config.jsp"><jsp:getProperty name="helper" property="reachability" /></a>--><%
if (helper.updateAvailable()) {
if ("true".equals(System.getProperty("net.i2p.router.web.UpdateHandler.updateInProgress", "false"))) {
out.print("<br />" + update.getStatus());
@@ -39,7 +39,7 @@
<b>Active:</b> <jsp:getProperty name="helper" property="activePeers" />/<jsp:getProperty name="helper" property="activeProfiles" /><br />
<b>Fast:</b> <jsp:getProperty name="helper" property="fastPeers" /><br />
<b>High capacity:</b> <jsp:getProperty name="helper" property="highCapacityPeers" /><br />
<b>Well integrated:</b> <jsp:getProperty name="helper" property="wellIntegratedPeers" /><br />
<!-- <b>Well integrated:</b> <jsp:getProperty name="helper" property="wellIntegratedPeers" /><br /> -->
<b>Failing:</b> <jsp:getProperty name="helper" property="failingPeers" /><br />
<!-- <b>Shitlisted:</b> <jsp:getProperty name="helper" property="shitlistedPeers" /><br /> -->
<b>Known:</b> <jsp:getProperty name="helper" property="allPeers" /><br /><%
@@ -65,7 +65,7 @@
%><hr />
<u><b><a href="config.jsp" title="Configure the bandwidth limits">Bandwidth in/out</a></b></u><br />
<b>1s:</b> <jsp:getProperty name="helper" property="inboundMinuteKBps" />/<jsp:getProperty name="helper" property="outboundMinuteKBps" />KBps<br />
<b>1s:</b> <jsp:getProperty name="helper" property="inboundSecondKBps" />/<jsp:getProperty name="helper" property="outboundSecondKBps" />KBps<br />
<b>5m:</b> <jsp:getProperty name="helper" property="inboundFiveMinuteKBps" />/<jsp:getProperty name="helper" property="outboundFiveMinuteKBps" />KBps<br />
<b>Total:</b> <jsp:getProperty name="helper" property="inboundLifetimeKBps" />/<jsp:getProperty name="helper" property="outboundLifetimeKBps" />KBps<br />
<b>Used:</b> <jsp:getProperty name="helper" property="inboundTransferred" />/<jsp:getProperty name="helper" property="outboundTransferred" /><br />
@@ -83,6 +83,7 @@
<b>Job lag:</b> <jsp:getProperty name="helper" property="jobLag" /><br />
<b>Message delay:</b> <jsp:getProperty name="helper" property="messageDelay" /><br />
<b>Tunnel lag:</b> <jsp:getProperty name="helper" property="tunnelLag" /><br />
<b>Handle backlog:</b> <jsp:getProperty name="helper" property="inboundBacklog" /><br />
<hr />
</div>


@@ -0,0 +1,63 @@
<%
boolean rendered = false;
String templateFile = request.getParameter("template");
if (templateFile != null) {
java.io.OutputStream cout = response.getOutputStream();
response.setContentType("image/png");
rendered = net.i2p.router.web.StatSummarizer.instance().renderPng(cout, templateFile);
}
net.i2p.stat.Rate rate = null;
String stat = request.getParameter("stat");
String period = request.getParameter("period");
boolean fakeBw = (stat != null && ("bw.combined".equals(stat)));
net.i2p.stat.RateStat rs = net.i2p.I2PAppContext.getGlobalContext().statManager().getRate(stat);
if ( !rendered && ((rs != null) || fakeBw) ) {
long per = -1;
try {
if (fakeBw)
per = 60*1000;
else
per = Long.parseLong(period);
if (!fakeBw)
rate = rs.getRate(per);
if ( (rate != null) || (fakeBw) ) {
java.io.OutputStream cout = response.getOutputStream();
String format = request.getParameter("format");
if ("xml".equals(format)) {
if (!fakeBw) {
response.setContentType("text/xml");
rendered = net.i2p.router.web.StatSummarizer.instance().getXML(rate, cout);
}
} else {
response.setContentType("image/png");
int width = -1;
int height = -1;
int periodCount = -1;
String str = request.getParameter("width");
if (str != null) try { width = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
str = request.getParameter("height");
if (str != null) try { height = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
str = request.getParameter("periodCount");
if (str != null) try { periodCount = Integer.parseInt(str); } catch (NumberFormatException nfe) {}
boolean hideLegend = Boolean.valueOf(""+request.getParameter("hideLegend")).booleanValue();
boolean hideGrid = Boolean.valueOf(""+request.getParameter("hideGrid")).booleanValue();
boolean hideTitle = Boolean.valueOf(""+request.getParameter("hideTitle")).booleanValue();
boolean showEvents = Boolean.valueOf(""+request.getParameter("showEvents")).booleanValue();
boolean showCredit = true;
if (request.getParameter("showCredit") != null)
showCredit = Boolean.valueOf(""+request.getParameter("showCredit")).booleanValue();
if (fakeBw)
rendered = net.i2p.router.web.StatSummarizer.instance().renderRatePng(cout, width, height, hideLegend, hideGrid, hideTitle, showEvents, periodCount, showCredit);
else
rendered = net.i2p.router.web.StatSummarizer.instance().renderPng(rate, cout, width, height, hideLegend, hideGrid, hideTitle, showEvents, periodCount, showCredit);
}
if (rendered)
cout.close();
//System.out.println("Rendered period " + per + " for the stat " + stat + "? " + rendered);
}
} catch (NumberFormatException nfe) {}
}
if (!rendered) {
response.sendError(404, "That stat is not available");
}
%>
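The JSP above tolerates malformed numeric query parameters by catching NumberFormatException and leaving the value at -1. A minimal sketch of that pattern, factored into a helper (`parseIntOrDefault` is a hypothetical name, not part of the I2P codebase):

```java
// Sketch of the lenient parameter parsing used in the stat-rendering JSP above.
// parseIntOrDefault is a hypothetical helper, not an I2P API.
public class ParamParse {
    // Return the parsed int, or def when the value is null or malformed,
    // mirroring the try/catch-NumberFormatException pattern in the JSP.
    static int parseIntOrDefault(String value, int def) {
        if (value == null) return def;
        try {
            return Integer.parseInt(value.trim());
        } catch (NumberFormatException nfe) {
            return def;
        }
    }
}
```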

View File

@@ -210,6 +210,11 @@ public class Connection {
}
}
if (packet != null) {
if (packet.isFlagSet(Packet.FLAG_RESET)) {
// sendReset takes care to prevent too-frequent RESET transmissions
sendReset();
return;
}
ResendPacketEvent evt = (ResendPacketEvent)packet.getResendEvent();
if (evt != null) {
boolean sent = evt.retransmit(false);
@@ -240,9 +245,12 @@ public class Connection {
_disconnectScheduledOn = _context.clock().now();
SimpleTimer.getInstance().addEvent(new DisconnectEvent(), DISCONNECT_TIMEOUT);
}
long now = _context.clock().now();
if (_resetSentOn + 10*1000 > now) return; // don't send resets too fast
if (_resetReceived) return;
_resetSent = true;
if (_resetSentOn <= 0)
_resetSentOn = _context.clock().now();
_resetSentOn = now;
if ( (_remotePeer == null) || (_sendStreamId <= 0) ) return;
PacketLocal reply = new PacketLocal(_context, _remotePeer);
reply.setFlag(Packet.FLAG_RESET);
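The hunk above rate-limits RESET packets to at most one per 10 seconds and suppresses them entirely once a reset has been received. The guard can be isolated as a small sketch (`ResetThrottle` is a hypothetical stand-in for the Connection fields involved):

```java
// Minimal sketch of the RESET throttling added above: suppress a reset if one
// was sent within the last 10 seconds or if we already received a reset.
public class ResetThrottle {
    static final long MIN_RESET_INTERVAL = 10 * 1000; // ms, matching the diff
    long resetSentOn = 0;
    boolean resetReceived = false;

    // Returns true when a RESET may be sent now, and records the send time.
    boolean maySendReset(long now) {
        if (resetReceived) return false;
        if (resetSentOn + MIN_RESET_INTERVAL > now) return false; // too soon
        resetSentOn = now;
        return true;
    }
}
```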

View File

@@ -101,7 +101,7 @@ public class ConnectionOptions extends I2PSocketOptionsImpl {
setMaxWindowSize(getInt(opts, PROP_MAX_WINDOW_SIZE, Connection.MAX_WINDOW_SIZE));
setConnectDelay(getInt(opts, PROP_CONNECT_DELAY, -1));
setProfile(getInt(opts, PROP_PROFILE, PROFILE_BULK));
setMaxMessageSize(getInt(opts, PROP_MAX_MESSAGE_SIZE, 4*1024));
setMaxMessageSize(getInt(opts, PROP_MAX_MESSAGE_SIZE, 960)); // 960 fits inside a single tunnel message
setRTT(getInt(opts, PROP_INITIAL_RTT, 10*1000));
setReceiveWindow(getInt(opts, PROP_INITIAL_RECEIVE_WINDOW, 1));
setResendDelay(getInt(opts, PROP_INITIAL_RESEND_DELAY, 1000));

View File

@@ -19,7 +19,7 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* $Revision: 1.7 $
* $Revision: 1.1 $
*/
package i2p.susi.dns;
@@ -99,7 +99,6 @@ public class AddressbookBean
return ConfigBean.addressbookPrefix + filename;
}
private Object[] entries;
public Object[] getEntries()
{
return entries;
@@ -130,10 +129,59 @@ public class AddressbookBean
public void setSerial(String serial) {
this.serial = serial;
}
/** Load addressbook and apply filter, returning messages about this. */
public String getLoadBookMessages()
{
// Config and addressbook now loaded here, hence not needed in getMessages()
loadConfig();
addressbook = new Properties();
String message = "";
try {
addressbook.load( new FileInputStream( getFileName() ) );
LinkedList list = new LinkedList();
Enumeration e = addressbook.keys();
while( e.hasMoreElements() ) {
String name = (String)e.nextElement();
String destination = addressbook.getProperty( name );
if( filter != null && filter.length() > 0 ) {
if( filter.compareTo( "0-9" ) == 0 ) {
char first = name.charAt(0);
if( first < '0' || first > '9' )
continue;
}
else if( ! name.toLowerCase().startsWith( filter.toLowerCase() ) ) {
continue;
}
}
if( search != null && search.length() > 0 ) {
if( name.indexOf( search ) == -1 ) {
continue;
}
}
list.addLast( new AddressBean( name, destination ) );
}
// Format a message about filtered addressbook size, and the number of displayed entries
message = "Filtered list contains " + list.size() + " entries";
if (list.size() > 300) message += ", displaying the first 300."; else message += ".";
Object array[] = list.toArray();
Arrays.sort( array, sorter );
entries = array;
}
catch (Exception e) {
Debug.debug( e.getClass().getName() + ": " + e.getMessage() );
}
if( message.length() > 0 )
message = "<p>" + message + "</p>";
return message;
}
/** Perform actions, returning messages about this. */
public String getMessages()
{
loadConfig();
// Loading config and addressbook moved into getLoadBookMessages()
String message = "";
if( action != null ) {
@@ -175,42 +223,7 @@ public class AddressbookBean
}
action = null;
addressbook = new Properties();
try {
addressbook.load( new FileInputStream( getFileName() ) );
LinkedList list = new LinkedList();
Enumeration e = addressbook.keys();
while( e.hasMoreElements() ) {
String name = (String)e.nextElement();
String destination = addressbook.getProperty( name );
if( filter != null && filter.length() > 0 ) {
if( filter.compareTo( "0-9" ) == 0 ) {
char first = name.charAt(0);
if( first < '0' || first > '9' )
continue;
}
else if( ! name.toLowerCase().startsWith( filter.toLowerCase() ) ) {
continue;
}
}
if( search != null && search.length() > 0 ) {
if( name.indexOf( search ) == -1 ) {
continue;
}
}
list.addLast( new AddressBean( name, destination ) );
}
Object array[] = list.toArray();
Arrays.sort( array, sorter );
entries = array;
}
catch (Exception e) {
Debug.debug( e.getClass().getName() + ": " + e.getMessage() );
}
if( message.length() > 0 )
message = "<p class=\"messages\">" + message + "</p>";
return message;
@@ -234,6 +247,10 @@ public class AddressbookBean
{
return getBook().compareToIgnoreCase( "router" ) == 0;
}
public boolean isPublished()
{
return getBook().compareToIgnoreCase( "published" ) == 0;
}
public void setFilter(String filter) {
if( filter != null && ( filter.length() == 0 || filter.compareToIgnoreCase( "none" ) == 0 ) ) {
filter = null;
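The addressbook filter semantics in getLoadBookMessages() above: a filter of "0-9" keeps names starting with a digit, any other filter is a case-insensitive prefix match, and search is a plain substring match. A sketch of that predicate (`BookFilter.matches` is a hypothetical helper):

```java
// Sketch of the addressbook filter/search semantics from getLoadBookMessages().
public class BookFilter {
    static boolean matches(String name, String filter, String search) {
        if (filter != null && filter.length() > 0) {
            if (filter.equals("0-9")) {
                // "0-9" bucket: keep only names beginning with a digit
                char first = name.charAt(0);
                if (first < '0' || first > '9') return false;
            } else if (!name.toLowerCase().startsWith(filter.toLowerCase())) {
                return false;
            }
        }
        // search is a case-sensitive substring match, as in the original
        if (search != null && search.length() > 0 && name.indexOf(search) == -1)
            return false;
        return true;
    }
}
```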

View File

@@ -20,7 +20,7 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* $Revision: 1.1 $
* $Revision: 1.2 $
*/
%>
<%@ page contentType="text/html"%>
@@ -60,6 +60,8 @@
<div id="messages">${book.messages}</div>
<span>${book.loadBookMessages}</span>
<div id="filter">
<p>Filter: <a href="addressbook.jsp?filter=a">a</a>
<a href="addressbook.jsp?filter=b">b</a>
@@ -115,16 +117,17 @@
<table class="book" cellspacing="0" cellpadding="5">
<tr class="head">
<c:if test="${book.master || book.router}">
<c:if test="${book.master || book.router || book.published}">
<th>&nbsp;</th>
</c:if>
<th>Name</th>
<th>Destination</th>
</tr>
<c:forEach items="${book.entries}" var="addr">
<!-- limit iterator to 300, or "Form too large" may result on submit -->
<c:forEach items="${book.entries}" var="addr" begin="0" end="299">
<tr class="list${book.trClass}">
<c:if test="${book.master || book.router}">
<c:if test="${book.master || book.router || book.published}">
<td class="checkbox"><input type="checkbox" name="checked" value="${addr.name}" alt="Mark for deletion"></td>
</c:if>
<td class="names"><a href="http://${addr.name}/">${addr.name}</a> -
@@ -136,7 +139,7 @@
</table>
</div>
<c:if test="${book.master || book.router}">
<c:if test="${book.master || book.router || book.published}">
<div id="buttons">
<p class="buttons"><input type="image" name="action" value="delete" src="images/delete.png" alt="Delete checked" />
</p>

View File

@@ -19,7 +19,7 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* $Revision: 1.8 $
* $Revision: 1.1 $
*/
package i2p.susi.webmail.pop3;
@@ -373,8 +373,7 @@ public class POP3MailBox {
}
if (socket != null) {
try {
if (sendCmd1a("")
&& sendCmd1a("USER " + user)
if (sendCmd1a("USER " + user)
&& sendCmd1a("PASS " + pass)
&& sendCmd1a("STAT") ) {

View File

@@ -6,6 +6,7 @@ import java.text.*;
import net.i2p.I2PAppContext;
import net.i2p.data.*;
import net.i2p.syndie.data.*;
import net.i2p.util.FileUtil;
import net.i2p.util.Log;
/**
@@ -27,6 +28,7 @@ import net.i2p.util.Log;
public class Archive {
private I2PAppContext _context;
private Log _log;
private BlogManager _mgr;
private File _rootDir;
private File _cacheDir;
private Map _blogInfo;
@@ -42,9 +44,10 @@ public class Archive {
public boolean accept(File dir, String name) { return name.endsWith(".snd"); }
};
public Archive(I2PAppContext ctx, String rootDir, String cacheDir) {
public Archive(I2PAppContext ctx, String rootDir, String cacheDir, BlogManager mgr) {
_context = ctx;
_log = ctx.logManager().getLog(Archive.class);
_mgr = mgr;
_rootDir = new File(rootDir);
if (!_rootDir.exists())
_rootDir.mkdirs();
@@ -71,6 +74,13 @@ public class Archive {
try {
fi = new FileInputStream(meta);
bi.load(fi);
if (_mgr.isBanned(bi.getKey().calculateHash())) {
fi.close();
fi = null;
_log.error("Deleting banned blog " + bi.getKey().calculateHash().toBase64());
delete(bi.getKey().calculateHash());
continue;
}
if (bi.verify(_context)) {
info.add(bi);
} else {
@@ -119,6 +129,12 @@ public class Archive {
_log.warn("Not storing invalid blog " + info);
return false;
}
if (_mgr.isBanned(info.getKey().calculateHash())) {
_log.error("Not storing banned blog " + info.getKey().calculateHash().toBase64(), new Exception("Stored by"));
return false;
}
boolean isNew = true;
synchronized (_blogInfo) {
BlogInfo old = (BlogInfo)_blogInfo.get(info.getKey().calculateHash());
@@ -211,7 +227,13 @@ public class Archive {
if (!entryDir.exists())
entryDir.mkdirs();
boolean ok = _extractor.extract(entryFile, entryDir, null, info);
boolean ok = true;
try {
ok = _extractor.extract(entryFile, entryDir, null, info);
} catch (IOException ioe) {
ok = false;
_log.error("Error extracting " + entryFile.getPath() + ", deleting it", ioe);
}
if (!ok) {
File files[] = entryDir.listFiles();
for (int i = 0; i < files.length; i++)
@@ -267,8 +289,9 @@ public class Archive {
if (blogKey == null) {
// no key, cache.
File entryDir = getEntryDir(entries[i]);
if (entryDir.exists())
if (entryDir.exists()) {
entry = getCachedEntry(entryDir);
}
if ((entry == null) || !entryDir.exists()) {
if (!extractEntry(entries[i], entryDir, info)) {
_log.error("Entry " + entries[i].getPath() + " is not valid");
@@ -326,6 +349,15 @@ public class Archive {
return rv;
}
public synchronized void delete(Hash blog) {
if (blog == null) return;
File blogDir = new File(_rootDir, blog.toBase64());
boolean deleted = FileUtil.rmdir(blogDir, false);
File cacheDir = new File(_cacheDir, blog.toBase64());
deleted = FileUtil.rmdir(cacheDir, false) && deleted;
_log.info("Deleted blog " + blog.toBase64() + " completely? " + deleted);
}
public boolean storeEntry(EntryContainer container) {
if (container == null) return false;
BlogURI uri = container.getURI();

View File

@@ -74,7 +74,7 @@ public class BlogManager {
_cacheDir.mkdirs();
_userDir.mkdirs();
_tempDir.mkdirs();
_archive = new Archive(ctx, _archiveDir.getAbsolutePath(), _cacheDir.getAbsolutePath());
_archive = new Archive(ctx, _archiveDir.getAbsolutePath(), _cacheDir.getAbsolutePath(), this);
if (regenIndex)
_archive.regenerateIndex();
}
@@ -890,6 +890,8 @@ public class BlogManager {
try {
BlogInfo info = new BlogInfo();
info.load(metadataStream);
if (isBanned(info.getKey().calculateHash()))
return false;
return _archive.storeBlogInfo(info);
} catch (IOException ioe) {
_log.error("Error importing meta", ioe);
@@ -906,6 +908,8 @@ public class BlogManager {
try {
EntryContainer c = new EntryContainer();
c.load(entryStream);
if (isBanned(c.getURI().getKeyHash()))
return false;
return _archive.storeEntry(c);
} catch (IOException ioe) {
_log.error("Error importing entry", ioe);
@@ -1060,4 +1064,49 @@ public class BlogManager {
return true;
return false;
}
public boolean isBanned(Hash blog) {
if ( (blog == null) || (blog.getData() == null) || (blog.getData().length <= 0) ) return false;
String str = blog.toBase64();
String banned = System.getProperty("syndie.bannedBlogs", "");
return (banned.indexOf(str) >= 0);
}
public String[] getBannedBlogs() {
List blogs = new ArrayList();
String str = System.getProperty("syndie.bannedBlogs", "");
StringTokenizer tok = new StringTokenizer(str, ",");
while (tok.hasMoreTokens()) {
String blog = tok.nextToken();
try {
Hash h = new Hash();
h.fromBase64(blog);
blogs.add(blog); // the base64 string, but verified
} catch (DataFormatException dfe) {
// ignored
}
}
String rv[] = new String[blogs.size()];
for (int i = 0; i < blogs.size(); i++)
rv[i] = (String)blogs.get(i);
return rv;
}
/**
* Delete the blog from the archive completely, and ban them from ever being added again
*/
public void purgeAndBan(Hash blog) {
String banned[] = getBannedBlogs();
StringBuffer buf = new StringBuffer();
String str = blog.toBase64();
buf.append(str);
for (int i = 0; banned != null && i < banned.length; i++) {
if (!banned[i].equals(str))
buf.append(",").append(banned[i]);
}
System.setProperty("syndie.bannedBlogs", buf.toString());
writeConfig();
_archive.delete(blog);
_archive.regenerateIndex();
}
}
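BlogManager above stores the ban list as comma-separated base64 hashes in the `syndie.bannedBlogs` system property; isBanned() uses indexOf, which is safe here only because every entry is a fixed-length base64 SHA-256 hash. A sketch of the same check with an exact token comparison instead (`BanList` is hypothetical, not the servlet's API):

```java
import java.util.StringTokenizer;

// Sketch of the comma-separated ban list used by BlogManager.isBanned():
// the list lives in the "syndie.bannedBlogs" property as base64 hashes.
// This variant compares whole tokens rather than using indexOf().
public class BanList {
    static boolean isBanned(String bannedProp, String blogBase64) {
        if (blogBase64 == null || blogBase64.length() == 0) return false;
        String list = (bannedProp == null) ? "" : bannedProp;
        StringTokenizer tok = new StringTokenizer(list, ",");
        while (tok.hasMoreTokens()) {
            if (tok.nextToken().equals(blogBase64)) return true;
        }
        return false;
    }
}
```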

View File

@@ -59,9 +59,9 @@ public class EntryExtractor {
}
public void extract(EntryContainer entry, File entryDir) throws IOException {
extractEntry(entry, entryDir);
extractHeaders(entry, entryDir);
extractMeta(entry, entryDir);
extractEntry(entry, entryDir);
Attachment attachments[] = entry.getAttachments();
if (attachments != null) {
for (int i = 0; i < attachments.length; i++) {
@@ -97,10 +97,14 @@ public class EntryExtractor {
}
}
private void extractEntry(EntryContainer entry, File entryDir) throws IOException {
Entry e = entry.getEntry();
if (e == null) throw new IOException("Entry is null");
String text = e.getText();
if (text == null) throw new IOException("Entry text is null");
FileOutputStream out = null;
try {
out = new FileOutputStream(new File(entryDir, ENTRY));
out.write(DataHelper.getUTF8(entry.getEntry().getText()));
out.write(DataHelper.getUTF8(text));
} finally {
out.close();
}

View File

@@ -28,10 +28,13 @@ public class UpdaterServlet extends GenericServlet {
super.init(config);
} catch (ServletException exp) {
}
/*
UpdaterThread thread = new UpdaterThread();
thread.setDaemon(true);
thread.start();
System.out.println("INFO: Starting Syndie Updater " + Updater.VERSION);
*/
System.out.println("INFO: Syndie Updater DISABLED. Use the new Syndie from http://syndie.i2p.net/");
}
}

View File

@@ -163,8 +163,9 @@ public class ArchiveIndex {
/** list of unique blogs locally known (set of Hash) */
public Set getUniqueBlogs() {
Set rv = new HashSet();
for (int i = 0; i < _blogs.size(); i++)
for (int i = 0; i < _blogs.size(); i++) {
rv.add(getBlog(i));
}
return rv;
}
public List getReplies(BlogURI uri) {
@@ -367,7 +368,10 @@ public class ArchiveIndex {
return;
tok.nextToken();
String keyStr = tok.nextToken();
Hash keyHash = new Hash(Base64.decode(keyStr));
byte k[] = Base64.decode(keyStr);
if ( (k == null) || (k.length != Hash.HASH_LENGTH) )
return; // ignore bad hashes
Hash keyHash = new Hash(k);
String whenStr = tok.nextToken();
long when = getIndexDate(whenStr);
String tag = tok.nextToken();
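The ArchiveIndex hunk above hardens hash parsing: Base64.decode may return null or a buffer of the wrong length, so the result is checked before constructing a Hash. A sketch of that validation, using java.util.Base64 as a stand-in for I2P's own Base64 codec (which uses a modified alphabet):

```java
import java.util.Base64;

// Sketch of the defensive base64-hash parsing added to ArchiveIndex above:
// reject input that does not decode to exactly HASH_LENGTH bytes.
// java.util.Base64 stands in for net.i2p.data.Base64 here.
public class HashParse {
    static final int HASH_LENGTH = 32; // SHA-256, as in net.i2p.data.Hash

    // Returns the decoded hash bytes, or null for malformed/wrong-length input.
    static byte[] parseHash(String b64) {
        byte[] k;
        try {
            k = Base64.getDecoder().decode(b64);
        } catch (IllegalArgumentException iae) {
            return null; // not valid base64
        }
        if (k.length != HASH_LENGTH) return null; // wrong length
        return k;
    }
}
```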

View File

@@ -60,7 +60,7 @@ public class EntryContainer {
this();
_entryURI = uri;
if ( (smlData == null) || (smlData.length <= 0) )
_entryData = new Entry(null);
_entryData = new Entry(""); //null);
else
_entryData = new Entry(DataHelper.getUTF8(smlData));
setHeader(HEADER_BLOGKEY, Base64.encode(uri.getKeyHash().getData()));
@@ -277,7 +277,7 @@ public class EntryContainer {
}
if (_entryData == null)
_entryData = new Entry(null);
_entryData = new Entry(""); //null);
_attachments = new Attachment[attachments.size()];

View File

@@ -46,6 +46,7 @@ public class AddressesServlet extends BaseServlet {
public static final String ACTION_DELETE_BLOG = "Delete author";
public static final String ACTION_UPDATE_BLOG = "Update author";
public static final String ACTION_ADD_BLOG = "Add author";
public static final String ACTION_PURGE_AND_BAN_BLOG = "Purge and ban author";
public static final String ACTION_DELETE_ARCHIVE = "Delete archive";
public static final String ACTION_UPDATE_ARCHIVE = "Update archive";
@@ -128,6 +129,8 @@ public class AddressesServlet extends BaseServlet {
if (pn.isMember(FilteredThreadIndex.GROUP_IGNORE)) {
out.write("Ignored? <input type=\"checkbox\" name=\"" + PARAM_IGNORE
+ "\" checked=\"true\" value=\"true\" title=\"If true, their threads are hidden\" /> ");
if (BlogManager.instance().authorizeRemote(user))
out.write("<input type=\"submit\" name=\"" + PARAM_ACTION + "\" value=\"" + ACTION_PURGE_AND_BAN_BLOG + "\" /> ");
} else {
out.write("Ignored? <input type=\"checkbox\" name=\"" + PARAM_IGNORE
+ "\" value=\"true\" title=\"If true, their threads are hidden\" /> ");

View File

@@ -64,13 +64,13 @@ public abstract class BaseServlet extends HttpServlet {
* key=value& of params that need to be tacked onto an http request that updates data, to
* prevent spoofing
*/
protected static String getAuthActionParams() { return PARAM_AUTH_ACTION + '=' + _authNonce + '&'; }
protected static String getAuthActionParams() { return PARAM_AUTH_ACTION + '=' + _authNonce + "&amp;"; }
/**
* key=value& of params that need to be tacked onto an http request that updates data, to
* prevent spoofing
*/
public static void addAuthActionParams(StringBuffer buf) {
buf.append(PARAM_AUTH_ACTION).append('=').append(_authNonce).append('&');
buf.append(PARAM_AUTH_ACTION).append('=').append(_authNonce).append("&amp;");
}
public void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
@@ -295,7 +295,7 @@ public abstract class BaseServlet extends HttpServlet {
if (AddressesServlet.ACTION_ADD_TAG.equals(action)) {
String name = req.getParameter(AddressesServlet.PARAM_NAME);
if (!user.getPetNameDB().containsName(name)) {
if ((name != null) && (name.trim().length() > 0) && (!user.getPetNameDB().containsName(name)) ) {
PetName pn = new PetName(name, AddressesServlet.NET_SYNDIE, AddressesServlet.PROTO_TAG, name);
user.getPetNameDB().add(pn);
BlogManager.instance().saveUser(user);
@@ -307,7 +307,7 @@ public abstract class BaseServlet extends HttpServlet {
(AddressesServlet.ACTION_ADD_OTHER.equals(action)) ||
(AddressesServlet.ACTION_ADD_PEER.equals(action)) ) {
PetName pn = buildNewAddress(req);
if ( (pn != null) && (pn.getName() != null) && (pn.getLocation() != null) &&
if ( (pn != null) && (pn.getName() != null) && (pn.getName().trim().length() > 0) && (pn.getLocation() != null) &&
(!user.getPetNameDB().containsName(pn.getName())) ) {
user.getPetNameDB().add(pn);
BlogManager.instance().saveUser(user);
@@ -329,6 +329,34 @@ public abstract class BaseServlet extends HttpServlet {
(AddressesServlet.ACTION_UPDATE_OTHER.equals(action)) ||
(AddressesServlet.ACTION_UPDATE_PEER.equals(action)) ) {
return updateAddress(user, req);
} else if (AddressesServlet.ACTION_PURGE_AND_BAN_BLOG.equals(action)) {
String name = req.getParameter(AddressesServlet.PARAM_NAME);
PetName pn = user.getPetNameDB().getByName(name);
if (pn != null) {
boolean purged = false;
if (BlogManager.instance().authorizeRemote(user)) {
Hash h = null;
BlogURI uri = new BlogURI(pn.getLocation());
if (uri.getKeyHash() != null) {
h = uri.getKeyHash();
}
if (h == null) {
byte b[] = Base64.decode(pn.getLocation());
if ( (b != null) && (b.length == Hash.HASH_LENGTH) )
h = new Hash(b);
}
if (h != null) {
BlogManager.instance().purgeAndBan(h);
purged = true;
}
}
if (purged) // force a new thread index
return true;
else
return false;
} else {
return false;
}
} else if ( (AddressesServlet.ACTION_DELETE_ARCHIVE.equals(action)) ||
(AddressesServlet.ACTION_DELETE_BLOG.equals(action)) ||
(AddressesServlet.ACTION_DELETE_EEPSITE.equals(action)) ||
@@ -716,6 +744,8 @@ public abstract class BaseServlet extends HttpServlet {
for (Iterator iter = names.iterator(); iter.hasNext(); ) {
String name = (String) iter.next();
PetName pn = db.getByName(name);
if (pn == null)
continue;
String proto = pn.getProtocol();
String loc = pn.getLocation();
if (proto != null && loc != null && "syndieblog".equals(proto) && pn.isMember(FilteredThreadIndex.GROUP_FAVORITE)) {
@@ -866,22 +896,22 @@ public abstract class BaseServlet extends HttpServlet {
ThreadNode child = node.getChild(0);
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=');
buf.append(child.getEntry().getKeyHash().toBase64()).append('/');
buf.append(child.getEntry().getEntryId()).append('&');
buf.append(child.getEntry().getEntryId()).append("&amp;");
}
if (!empty(viewPost))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append("&amp;");
else if (!empty(viewThread))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(author))
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append("&amp;");
return buf.toString();
}
@@ -901,21 +931,21 @@ public abstract class BaseServlet extends HttpServlet {
// collapse node == let the node be visible
buf.append('?').append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=');
buf.append(node.getEntry().getKeyHash().toBase64()).append('/');
buf.append(node.getEntry().getEntryId()).append('&');
buf.append(node.getEntry().getEntryId()).append("&amp;");
if (!empty(viewPost))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append("&amp;");
else if (!empty(viewThread))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(author))
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append("&amp;");
return buf.toString();
}
@@ -939,23 +969,23 @@ public abstract class BaseServlet extends HttpServlet {
buf.append(uri);
buf.append('?');
if (!empty(visible))
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=').append(visible).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_ADD_TO_GROUP_LOCATION).append('=').append(author.toBase64()).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_ADD_TO_GROUP_NAME).append('=').append(group).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=').append(visible).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_ADD_TO_GROUP_LOCATION).append('=').append(author.toBase64()).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_ADD_TO_GROUP_NAME).append('=').append(group).append("&amp;");
if (!empty(viewPost))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append("&amp;");
else if (!empty(viewThread))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(filteredAuthor))
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(filteredAuthor).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(filteredAuthor).append("&amp;");
addAuthActionParams(buf);
return buf.toString();
@@ -966,23 +996,23 @@ public abstract class BaseServlet extends HttpServlet {
buf.append(uri);
buf.append('?');
if (!empty(visible))
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=').append(visible).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_REMOVE_FROM_GROUP_NAME).append('=').append(name).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_REMOVE_FROM_GROUP).append('=').append(group).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=').append(visible).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_REMOVE_FROM_GROUP_NAME).append('=').append(name).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_REMOVE_FROM_GROUP).append('=').append(group).append("&amp;");
if (!empty(viewPost))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_POST).append('=').append(viewPost).append("&amp;");
else if (!empty(viewThread))
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=').append(viewThread).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(filteredAuthor))
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(filteredAuthor).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(filteredAuthor).append("&amp;");
addAuthActionParams(buf);
return buf.toString();
@@ -1024,24 +1054,23 @@ public abstract class BaseServlet extends HttpServlet {
}
buf.append('?').append(ThreadedHTMLRenderer.PARAM_VISIBLE).append('=');
buf.append(expandTo.getKeyHash().toBase64()).append('/');
buf.append(expandTo.getEntryId()).append('&');
buf.append(expandTo.getEntryId()).append("&amp;");
buf.append(ThreadedHTMLRenderer.PARAM_VIEW_THREAD).append('=');
buf.append(node.getEntry().getKeyHash().toBase64()).append('/');
buf.append(node.getEntry().getEntryId()).append('&');
buf.append(node.getEntry().getEntryId()).append("&amp;");
if (!empty(offset))
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_OFFSET).append('=').append(offset).append("&amp;");
if (!empty(tags))
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_TAGS).append('=').append(tags).append("&amp;");
if (!empty(author)) {
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append('&');
buf.append(ThreadedHTMLRenderer.PARAM_AUTHOR).append('=').append(author).append("&amp;");
if (authorOnly)
buf.append(ThreadedHTMLRenderer.PARAM_THREAD_AUTHOR).append("=true&");
buf.append(ThreadedHTMLRenderer.PARAM_THREAD_AUTHOR).append("=true&amp;");
}
buf.append("#").append(node.getEntry().toString());
return buf.toString();
}
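The BaseServlet changes above all make the same fix: a bare '&' separating query parameters inside HTML output must be escaped as "&amp;amp;" for the markup to be valid. A sketch of the pattern (`appendParam` is a hypothetical helper, not the servlet's actual API):

```java
// Sketch of the query-string building change above: parameter separators in
// URLs embedded in HTML must be emitted as "&amp;", not a bare '&'.
public class QueryBuild {
    static void appendParam(StringBuilder buf, String key, String value) {
        buf.append(key).append('=').append(value).append("&amp;");
    }
}
```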

View File

@@ -62,6 +62,8 @@ public class RemoteArchiveBean {
}
private boolean ignoreBlog(User user, Hash blog) {
if (BlogManager.instance().isBanned(blog))
return true;
PetNameDB db = user.getPetNameDB();
PetName pn = db.getByLocation(blog.toBase64());
return ( (pn!= null) && (pn.isMember("Ignore")) );
@@ -639,6 +641,8 @@ public class RemoteArchiveBean {
int newBlogs = 0;
for (Iterator iter = remoteBlogs.iterator(); iter.hasNext(); ) {
Hash blog = (Hash)iter.next();
if ( (blog == null) || (blog.getData() == null) || (blog.getData().length <= 0) )
continue;
if (ignoreBlog(user, blog))
continue;
if (!localBlogs.contains(blog)) {

View File

@@ -9,6 +9,11 @@
<a href="blogs.jsp">blogs</a></p>
<p><a href="post.jsp">Create</a> a new post of your own</p>
<p><a href="about.html">Learn more</a> about Syndie</p>
<p><b>NOTE:</b> This version of Syndie is being replaced by
<a href="http://syndie.i2p.net">the new Syndie</a>!
The new Syndie is a standalone application under active development.
Please give the new Syndie a try, as it has lots more traffic
than this version. Don't expect anybody to see your posts here.</p>
</td></tr></table>
</div>
</body></html>

View File

@@ -85,4 +85,360 @@ td.s_detail_summDetail {
td.s_summary_summ {
font-size: 0.8em;
background-color: #DDDDFF;
}
/* following are doubtful salmon's contributions */
body {
margin : 0px;
padding : 0px;
width: 99%;
font-family : Arial, sans-serif, Helvetica;
background-color : #FFF;
color : black;
font-size : 100%;
/* we've avoided Tantek Hacks so far,
** but we can't avoid using the non-w3c method of
** box rendering. (and therefore one of mozilla's
** proprietary -moz properties (which hopefully they'll
** drop soon).
*/
-moz-box-sizing : border-box;
box-sizing : border-box;
}
a:link{color:#007}
a:visited{color:#606}
a:hover{color:#720}
a:active{color:#900}
select {
min-width: 1.5em;
}
.overallTable {
border-spacing: 0px;
border-collapse: collapse;
float: left;
}
.topNav {
background-color: #BBB;
}
.topNav_user {
text-align: left;
float: left;
display: inline;
}
.topNav_admin {
text-align: right;
float: right;
margin: 0 5px 0 0;
display: inline;
}
.controlBar {
border-bottom: thick double #CCF;
border-left: medium solid #CCF;
border-right: medium solid #CCF;
background-color: #EEF;
color: inherit;
font-size: small;
clear: left; /* fixes a bug in Opera */
}
.controlBarRight {
text-align: right;
}
.threadEven {
background-color: #FFF;
white-space: nowrap;
}
.threadOdd {
background-color: #FFC;
white-space: nowrap;
}
.threadLeft {
text-align: left;
}
.threadNav {
background-color: #EEF;
border: medium solid #CCF;
}
.threadNavRight {
text-align: right;
float: right;
background-color: #EEF;
}
.rightOffset {
float: right;
margin: 0 5px 0 0;
display: inline;
}
.threadInfoLeft {
float: left;
margin: 5px 0px 0 0;
display: inline;
}
.threadInfoRight {
float: right;
margin: 0 5px 0 0;
display: inline;
}
.postMeta {
border-top: 1px solid black;
background-color: #FFB;
}
.postMetaSubject {
text-align: left;
font-size: large;
}
.postMetaLink {
text-align: right;
}
.postDetails {
background-color: #FFC;
}
.postReply {
background-color: #CCF;
}
.postReplyText {
background-color: #CCF;
}
.postReplyOptions {
background-color: #CCF;
}
.syndieBlogTopNav {
padding: 0.5em;
width: 98%;
border: medium solid #CCF;
background-color: #EEF;
font-size: small;
}
.syndieBlogTopNavUser {
text-align: left;
}
.syndieBlogTopNavAdmin {
text-align: right;
}
.syndieBlogHeader {
width: 100%;
font-size: 1.4em;
background-color: #000;
text-align: Left;
float: Left;
}
.syndieBlogHeader a {
color: #FFF;
padding: 4px;
}
.syndieBlogHeader a:hover {
color:#88F;
padding: 4px;
}
.syndieBlogLogo {
float: left;
display: inline;
}
.syndieBlogLinks {
width: 20%;
float: left;
}
.syndieBlogLinkGroup {
font-size: 0.8em;
background-color: #DDD;
border: 1px solid black;
margin: 5px;
padding: 2px;
}
.syndieBlogLinkGroup ul {
list-style: none;
}
.syndieBlogLinkGroup li {
}
.syndieBlogLinkGroupName {
font-weight: bold;
width: 100%;
border-bottom: 1px dashed black;
display: block;
}
.syndieBlogPostInfoGroup {
font-size: 0.8em;
background-color: #FFEA9F;
border: 1px solid black;
margin: 5px;
padding: 2px;
}
.syndieBlogPostInfoGroup ol {
list-style: none;
}
.syndieBlogPostInfoGroup li {
}
.syndieBlogPostInfoGroup li a {
display: block;
}
.syndieBlogPostInfoGroupName {
font-weight: bold;
width: 100%;
border-bottom: 1px dashed black;
display: block;
}
.syndieBlogMeta {
text-align: left;
font-size: 0.8em;
background-color: #DDD;
border: 1px solid black;
margin: 5px;
padding: 2px;
}
.syndieBlogBody {
width: 80%;
float: left;
}
.syndieBlogPost {
border: 1px solid black;
margin-top: 5px;
margin-right: 5px;
}
.syndieBlogPostHeader {
background-color: #FFB;
padding: 2px;
}
.syndieBlogPostSubject {
font-weight: bold;
}
.syndieBlogPostFrom {
text-align: right;
}
.syndieBlogPostSummary {
background-color: #FFF;
padding: 2px;
}
.syndieBlogPostDetails {
background-color: #FFC;
padding: 2px;
}
.syndieBlogNav {
text-align: center;
}
.syndieBlogComments {
border: none;
margin-top: 5px;
margin-left: 0px;
float: left;
}
.syndieBlogComments ul {
list-style: none;
margin-left: 10px;
}
.syndieBlogCommentInfoGroup {
font-size: 0.8em;
margin-right: 5px;
}
.syndieBlogCommentInfoGroup ol {
list-style: none;
}
.syndieBlogCommentInfoGroup li {
}
.syndieBlogCommentInfoGroup li a {
display: block;
}
.syndieBlogCommentInfoGroupName {
font-size: 0.8em;
font-weight: bold;
}
.syndieBlogFavorites {
float: left;
margin: 5px 0px 0 0;
display: inline;
}
.syndieBlogList {
float: right;
margin: 5px 0px 0 0;
display: inline;
}
.b_topnavUser {
text-align: right;
background-color: #CCD;
}
.b_topnavHome {
background-color: #CCD;
color: #000;
width: 50px;
text-align: left;
}
.b_topnav {
background-color: #CCD;
}
.b_content {
}
.s_summary_overall {
}
.s_detail_overall {
}
.s_detail_subject {
font-size: 0.8em;
text-align: left;
background-color: #CCF;
}
.s_detail_quote {
margin-left: 1em;
border: 1px solid #DBDBDB;
background-color: #E0E0E0;
}
.s_detail_italic {
font-style: italic;
}
.s_detail_bold {
font-style: normal;
font-weight: bold;
}
.s_detail_underline {
font-style: normal;
text-decoration: underline;
}
.s_detail_meta {
font-size: 0.8em;
text-align: right;
background-color: #CCF;
}
.s_summary_subject {
font-size: 0.8em;
text-align: left;
background-color: #CCF;
}
.s_summary_meta {
font-size: 0.8em;
text-align: right;
background-color: #CCF;
}
.s_summary_quote {
margin-left: 1em;
border: 1px solid #DBDBDB;
background-color: #E0E0E0;
}
.s_summary_italic {
font-style: italic;
}
.s_summary_bold {
font-style: normal;
font-weight: bold;
}
.s_summary_underline {
font-style: normal;
text-decoration: underline;
}
.s_summary_summDetail {
font-size: 0.8em;
}
.s_detail_summDetail {
}
.s_detail_summDetailBlog {
}
.s_detail_summDetailBlogLink {
}
td.s_detail_summDetail {
background-color: #CCF;
}
td.s_summary_summ {
width: 80%;
font-size: 0.8em;
background-color: #CCF;
}

View File

@@ -169,7 +169,14 @@ public class SysTray implements SysTrayMenuListener {
_itemOpenConsole.addSysTrayMenuListener(this);
// _sysTrayMenu.addItem(_itemShutdown);
// _sysTrayMenu.addSeparator();
_sysTrayMenu.addItem(_itemSelectBrowser);
// hide it, as there have been reports of b0rked behavior on some JVMs.
// specifically, that on XP & sun1.5.0.1, a user launching i2p w/out the
// service wrapper would create netDb/, peerProfiles/, and other files
// underneath each directory browsed to - as if the router's "." directory
// is changing whenever the itemSelectBrowser's JFileChooser changed
// directories. This has not been reproduced or confirmed yet, but is
// pretty scary, and this function isn't too necessary.
//_sysTrayMenu.addItem(_itemSelectBrowser);
_sysTrayMenu.addItem(_itemOpenConsole);
refreshDisplay();
}

View File

@@ -32,7 +32,7 @@ public class CPUID {
* initialization? this would otherwise use the Log component, but this makes
* it easier for other systems to reuse this class
*/
private static final boolean _doLog = true;
private static final boolean _doLog = System.getProperty("jcpuid.dontLog") == null;
//.matches() is a java 1.4+ addition, using a simplified version for 1.3+
//private static final boolean isX86 = System.getProperty("os.arch").toLowerCase().matches("i?[x0-9]86(_64)?");
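The `_doLog` change above gates logging on a system property. A minimal sketch of that idiom (the class is illustrative; only the `jcpuid.dontLog` property name comes from the diff — note the real field is a `static final` evaluated once at class load, while this sketch re-reads the property per call):

```java
public class PropertyGatedLog {
    // Logging stays enabled unless -Djcpuid.dontLog=<anything> is passed.
    static boolean loggingEnabled() {
        return System.getProperty("jcpuid.dontLog") == null;
    }

    static void log(String msg) {
        if (loggingEnabled())
            System.out.println(msg);
    }
}
```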

View File

@@ -1,7 +1,7 @@
package gnu.crypto.hash;
// ----------------------------------------------------------------------------
// $Id: BaseHash.java,v 1.10 2005/10/06 04:24:14 rsdio Exp $
// $Id: BaseHashStandalone.java,v 1.1 2006/02/26 16:30:59 jrandom Exp $
//
// Copyright (C) 2001, 2002, Free Software Foundation, Inc.
//
@@ -46,9 +46,9 @@ package gnu.crypto.hash;
/**
* <p>A base abstract class to facilitate hash implementations.</p>
*
* @version $Revision: 1.10 $
* @version $Revision: 1.1 $
*/
public abstract class BaseHash implements IMessageDigest {
public abstract class BaseHashStandalone implements IMessageDigestStandalone {
// Constants and variables
// -------------------------------------------------------------------------
@@ -78,7 +78,7 @@ public abstract class BaseHash implements IMessageDigest {
* @param hashSize the block size of the output in bytes.
* @param blockSize the block size of the internal transform.
*/
protected BaseHash(String name, int hashSize, int blockSize) {
protected BaseHashStandalone(String name, int hashSize, int blockSize) {
super();
this.name = name;
@@ -95,7 +95,7 @@ public abstract class BaseHash implements IMessageDigest {
// Instance methods
// -------------------------------------------------------------------------
// IMessageDigest interface implementation ---------------------------------
// IMessageDigestStandalone interface implementation ---------------------------------
public String name() {
return name;

View File

@@ -1,7 +1,7 @@
package gnu.crypto.hash;
// ----------------------------------------------------------------------------
// $Id: IMessageDigest.java,v 1.11 2005/10/06 04:24:14 rsdio Exp $
// $Id: IMessageDigestStandalone.java,v 1.1 2006/02/26 16:30:59 jrandom Exp $
//
// Copyright (C) 2001, 2002, Free Software Foundation, Inc.
//
@@ -49,9 +49,9 @@ package gnu.crypto.hash;
* <p>A hash (or message digest) algorithm produces its output by iterating a
* basic compression function on blocks of data.</p>
*
* @version $Revision: 1.11 $
* @version $Revision: 1.1 $
*/
public interface IMessageDigest extends Cloneable {
public interface IMessageDigestStandalone extends Cloneable {
// Constants
// -------------------------------------------------------------------------

View File

@@ -1,7 +1,7 @@
package gnu.crypto.hash;
// ----------------------------------------------------------------------------
// $Id: Sha256.java,v 1.2 2005/10/06 04:24:14 rsdio Exp $
// $Id: Sha256Standalone.java,v 1.2 2006/03/16 16:45:19 jrandom Exp $
//
// Copyright (C) 2003 Free Software Foundation, Inc.
//
@@ -61,7 +61,7 @@ package gnu.crypto.hash;
*
* @version $Revision: 1.2 $
*/
public class Sha256Standalone extends BaseHash {
public class Sha256Standalone extends BaseHashStandalone {
// Constants and variables
// -------------------------------------------------------------------------
private static final int[] k = {
@@ -127,10 +127,12 @@ public class Sha256Standalone extends BaseHash {
// Class methods
// -------------------------------------------------------------------------
/*
public static final int[] G(int hh0, int hh1, int hh2, int hh3, int hh4,
int hh5, int hh6, int hh7, byte[] in, int offset) {
return sha(hh0, hh1, hh2, hh3, hh4, hh5, hh6, hh7, in, offset);
}
*/
// Instance methods
// -------------------------------------------------------------------------
@@ -141,19 +143,21 @@ public class Sha256Standalone extends BaseHash {
return new Sha256Standalone(this);
}
// Implementation of concrete methods in BaseHash --------------------------
// Implementation of concrete methods in BaseHashStandalone --------------------------
private int transformResult[] = new int[8];
protected void transform(byte[] in, int offset) {
int[] result = sha(h0, h1, h2, h3, h4, h5, h6, h7, in, offset);
//int[] result = sha(h0, h1, h2, h3, h4, h5, h6, h7, in, offset);
sha(h0, h1, h2, h3, h4, h5, h6, h7, in, offset, transformResult);
h0 = result[0];
h1 = result[1];
h2 = result[2];
h3 = result[3];
h4 = result[4];
h5 = result[5];
h6 = result[6];
h7 = result[7];
h0 = transformResult[0];
h1 = transformResult[1];
h2 = transformResult[2];
h3 = transformResult[3];
h4 = transformResult[4];
h5 = transformResult[5];
h6 = transformResult[6];
h7 = transformResult[7];
}
protected byte[] padBuffer() {
@@ -218,8 +222,8 @@ public class Sha256Standalone extends BaseHash {
// SHA specific methods ----------------------------------------------------
private static final synchronized int[]
sha(int hh0, int hh1, int hh2, int hh3, int hh4, int hh5, int hh6, int hh7, byte[] in, int offset) {
private static final synchronized void
sha(int hh0, int hh1, int hh2, int hh3, int hh4, int hh5, int hh6, int hh7, byte[] in, int offset, int out[]) {
int A = hh0;
int B = hh1;
int C = hh2;
@@ -255,8 +259,18 @@ public class Sha256Standalone extends BaseHash {
A = T + T2;
}
/*
return new int[] {
hh0 + A, hh1 + B, hh2 + C, hh3 + D, hh4 + E, hh5 + F, hh6 + G, hh7 + H
};
*/
out[0] = hh0 + A;
out[1] = hh1 + B;
out[2] = hh2 + C;
out[3] = hh3 + D;
out[4] = hh4 + E;
out[5] = hh5 + F;
out[6] = hh6 + G;
out[7] = hh7 + H;
}
}
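The rewritten `sha(...)` above replaces a freshly allocated `int[8]` per call with a caller-supplied output array. A minimal sketch of that output-parameter pattern (names are illustrative, not I2P's):

```java
public class OutParamDemo {
    /** writes a[i] + b[i] into a caller-owned buffer instead of allocating */
    static void addInto(int[] a, int[] b, int[] out) {
        for (int i = 0; i < out.length; i++)
            out[i] = a[i] + b[i];
    }
}
```

A single scratch array (like `transformResult` above) can then be reused across many transform calls, cutting garbage-collector churn on the hash hot path.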

View File

@@ -0,0 +1,175 @@
package gnu.crypto.prng;
import java.util.*;
/**
* Fortuna instance that tries to avoid blocking wherever possible by using several
* separately filled buffer segments, rather than one buffer that blocks once its
* data has been consumed.
*/
public class AsyncFortunaStandalone extends FortunaStandalone implements Runnable {
private static final int BUFFERS = 16;
private static final int BUFSIZE = 256*1024;
private final byte asyncBuffers[][] = new byte[BUFFERS][BUFSIZE];
private final int status[] = new int[BUFFERS];
private int nextBuf = 0;
private static final int STATUS_NEED_FILL = 0;
private static final int STATUS_FILLING = 1;
private static final int STATUS_FILLED = 2;
private static final int STATUS_LIVE = 3;
public AsyncFortunaStandalone() {
super();
for (int i = 0; i < BUFFERS; i++)
status[i] = STATUS_NEED_FILL;
}
public void startup() {
Thread refillThread = new Thread(this, "PRNG");
refillThread.setDaemon(true);
refillThread.setPriority(Thread.MIN_PRIORITY+1);
refillThread.start();
}
/** the seed is only propagated once the prng is started with startup() */
public void seed(byte val[]) {
Map props = new HashMap(1);
props.put(SEED, (Object)val);
init(props);
//fillBlock();
}
protected void allocBuffer() {}
/**
* make the next available filled buffer current, scheduling any unfilled
* buffers for refill, and blocking until at least one buffer is ready
*/
protected void rotateBuffer() {
synchronized (asyncBuffers) {
// wait until we get some filled
long before = System.currentTimeMillis();
long waited = 0;
while (status[nextBuf] != STATUS_FILLED) {
//System.out.println(Thread.currentThread().getName() + ": Next PRNG buffer "
// + nextBuf + " isn't ready (" + status[nextBuf] + ")");
//new Exception("source").printStackTrace();
asyncBuffers.notifyAll();
try {
asyncBuffers.wait();
} catch (InterruptedException ie) {}
waited = System.currentTimeMillis()-before;
}
if (waited > 10*1000)
System.out.println(Thread.currentThread().getName() + ": Took " + waited
+ "ms for a full PRNG buffer to be found");
//System.out.println(Thread.currentThread().getName() + ": Switching to prng buffer " + nextBuf);
buffer = asyncBuffers[nextBuf];
status[nextBuf] = STATUS_LIVE;
int prev=nextBuf-1;
if (prev<0)
prev = BUFFERS-1;
if (status[prev] == STATUS_LIVE)
status[prev] = STATUS_NEED_FILL;
nextBuf++;
if (nextBuf >= BUFFERS)
nextBuf = 0;
asyncBuffers.notify();
}
}
public void run() {
while (true) {
int toFill = -1;
try {
synchronized (asyncBuffers) {
for (int i = 0; i < BUFFERS; i++) {
if (status[i] == STATUS_NEED_FILL) {
status[i] = STATUS_FILLING;
toFill = i;
break;
}
}
if (toFill == -1) {
//System.out.println(Thread.currentThread().getName() + ": All pending buffers full");
asyncBuffers.wait();
}
}
} catch (InterruptedException ie) {}
if (toFill != -1) {
//System.out.println(Thread.currentThread().getName() + ": Filling prng buffer " + toFill);
long before = System.currentTimeMillis();
doFill(asyncBuffers[toFill]);
long after = System.currentTimeMillis();
synchronized (asyncBuffers) {
status[toFill] = STATUS_FILLED;
//System.out.println(Thread.currentThread().getName() + ": Prng buffer " + toFill + " filled after " + (after-before));
asyncBuffers.notifyAll();
}
Thread.yield();
long waitTime = (after-before)*5;
if (waitTime <= 0) // somehow postman saw waitTime show up as negative
waitTime = 50;
try { Thread.sleep(waitTime); } catch (InterruptedException ie) {}
}
}
}
public void fillBlock()
{
rotateBuffer();
}
private void doFill(byte buf[]) {
long start = System.currentTimeMillis();
if (pool0Count >= MIN_POOL_SIZE
&& System.currentTimeMillis() - lastReseed > 100)
{
reseedCount++;
//byte[] seed = new byte[0];
for (int i = 0; i < NUM_POOLS; i++)
{
if (reseedCount % (1 << i) == 0) {
generator.addRandomBytes(pools[i].digest());
}
}
lastReseed = System.currentTimeMillis();
}
generator.nextBytes(buf);
long now = System.currentTimeMillis();
long diff = now-lastRefill;
lastRefill = now;
long refillTime = now-start;
//System.out.println("Refilling " + (++refillCount) + " after " + diff + " for the PRNG took " + refillTime);
}
public static void main(String args[]) {
try {
AsyncFortunaStandalone rand = new AsyncFortunaStandalone();
byte seed[] = new byte[1024];
rand.seed(seed);
System.out.println("Before starting prng");
rand.startup();
System.out.println("Starting prng, waiting 1 minute");
try { Thread.sleep(60*1000); } catch (InterruptedException ie) {}
System.out.println("PRNG started, beginning test");
long before = System.currentTimeMillis();
byte buf[] = new byte[1024];
java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream();
java.util.zip.GZIPOutputStream gos = new java.util.zip.GZIPOutputStream(baos);
for (int i = 0; i < 1024; i++) {
rand.nextBytes(buf);
gos.write(buf);
}
long after = System.currentTimeMillis();
gos.finish();
byte compressed[] = baos.toByteArray();
System.out.println("Compressed size of 1MB: " + compressed.length + " took " + (after-before));
} catch (Exception e) { e.printStackTrace(); }
try { Thread.sleep(5*60*1000); } catch (InterruptedException ie) {}
}
}
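The rotation logic in `rotateBuffer()` and `run()` above is a small state machine over a ring of buffers: slots cycle NEED_FILL → FILLED → LIVE, with the previous LIVE slot recycled. A single-threaded sketch of just the transitions (no locks or threads, and the intermediate FILLING state is collapsed away; names are illustrative):

```java
public class BufferRing {
    static final int NEED_FILL = 0, FILLED = 2, LIVE = 3;
    final int[] status;
    int next = 0;

    BufferRing(int n) { status = new int[n]; } // all slots start NEED_FILL

    /** consumer: take the next FILLED slot live, recycle the previous LIVE one */
    int rotate() {
        if (status[next] != FILLED)
            throw new IllegalStateException("slot " + next + " not ready");
        status[next] = LIVE;
        int prev = (next + status.length - 1) % status.length;
        if (status[prev] == LIVE)
            status[prev] = NEED_FILL;
        int taken = next;
        next = (next + 1) % status.length;
        return taken;
    }

    /** producer: claim one NEED_FILL slot and mark it FILLED */
    int fillOne() {
        for (int i = 0; i < status.length; i++)
            if (status[i] == NEED_FILL) { status[i] = FILLED; return i; }
        return -1; // nothing to do; the real code wait()s here
    }
}
```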

View File

@@ -49,7 +49,7 @@ import java.util.Map;
* Modified slightly by jrandom for I2P (removing unneeded exceptions)
* @version $Revision: 1.1 $
*/
public abstract class BasePRNGStandalone implements IRandom {
public abstract class BasePRNGStandalone implements IRandomStandalone {
// Constants and variables
// -------------------------------------------------------------------------
@@ -88,7 +88,7 @@ public abstract class BasePRNGStandalone implements IRandom {
// Instance methods
// -------------------------------------------------------------------------
// IRandom interface implementation ----------------------------------------
// IRandomStandalone interface implementation ----------------------------------------
public String name() {
return name;

View File

@@ -97,20 +97,22 @@ import net.i2p.crypto.CryptixAESKeyCache;
* gnu-crypto implementation, which has been imported into GNU/classpath
*
*/
public class FortunaStandalone extends BasePRNGStandalone implements Serializable, RandomEventListener
public class FortunaStandalone extends BasePRNGStandalone implements Serializable, RandomEventListenerStandalone
{
private static final long serialVersionUID = 0xFACADE;
private static final int SEED_FILE_SIZE = 64;
private static final int NUM_POOLS = 32;
private static final int MIN_POOL_SIZE = 64;
private final Generator generator;
private final Sha256Standalone[] pools;
private long lastReseed;
private int pool;
private int pool0Count;
private int reseedCount;
static final int NUM_POOLS = 32;
static final int MIN_POOL_SIZE = 64;
final Generator generator;
final Sha256Standalone[] pools;
long lastReseed;
int pool;
int pool0Count;
int reseedCount;
static long refillCount = 0;
static long lastRefill = System.currentTimeMillis();
public static final String SEED = "gnu.crypto.prng.fortuna.seed";
@@ -124,6 +126,9 @@ public class FortunaStandalone extends BasePRNGStandalone implements Serializabl
lastReseed = 0;
pool = 0;
pool0Count = 0;
allocBuffer();
}
protected void allocBuffer() {
buffer = new byte[4*1024*1024]; //256]; // larger buffer to reduce churn
}
@@ -145,6 +150,7 @@ public class FortunaStandalone extends BasePRNGStandalone implements Serializabl
public void fillBlock()
{
long start = System.currentTimeMillis();
if (pool0Count >= MIN_POOL_SIZE
&& System.currentTimeMillis() - lastReseed > 100)
{
@@ -159,6 +165,11 @@ public class FortunaStandalone extends BasePRNGStandalone implements Serializabl
lastReseed = System.currentTimeMillis();
}
generator.nextBytes(buffer);
long now = System.currentTimeMillis();
long diff = now-lastRefill;
lastRefill = now;
long refillTime = now-start;
System.out.println("Refilling " + (++refillCount) + " after " + diff + " for the PRNG took " + refillTime);
}
public void addRandomByte(byte b)
@@ -177,7 +188,7 @@ public class FortunaStandalone extends BasePRNGStandalone implements Serializabl
pool = (pool + 1) % NUM_POOLS;
}
public void addRandomEvent(RandomEvent event)
public void addRandomEvent(RandomEventStandalone event)
{
if (event.getPoolNumber() < 0 || event.getPoolNumber() >= pools.length)
throw new IllegalArgumentException("pool number out of range: "
@@ -338,6 +349,34 @@ public class FortunaStandalone extends BasePRNGStandalone implements Serializabl
}
public static void main(String args[]) {
byte in[] = new byte[16];
byte out[] = new byte[16];
byte key[] = new byte[32];
try {
CryptixAESKeyCache.KeyCacheEntry buf = CryptixAESKeyCache.createNew();
Object cryptixKey = CryptixRijndael_Algorithm.makeKey(key, 16, buf);
long beforeAll = System.currentTimeMillis();
for (int i = 0; i < 256; i++) {
//long before =System.currentTimeMillis();
for (int j = 0; j < 1024; j++)
CryptixRijndael_Algorithm.blockEncrypt(in, out, 0, 0, cryptixKey);
//long after = System.currentTimeMillis();
//System.out.println("encrypting 16KB took " + (after-before));
}
long after = System.currentTimeMillis();
System.out.println("encrypting 4MB took " + (after-beforeAll));
} catch (Exception e) { e.printStackTrace(); }
try {
CryptixAESKeyCache.KeyCacheEntry buf = CryptixAESKeyCache.createNew();
Object cryptixKey = CryptixRijndael_Algorithm.makeKey(key, 16, buf);
byte data[] = new byte[4*1024*1024];
long beforeAll = System.currentTimeMillis();
//CryptixRijndael_Algorithm.ecbBulkEncrypt(data, data, cryptixKey);
long after = System.currentTimeMillis();
System.out.println("encrypting 4MB took " + (after-beforeAll));
} catch (Exception e) { e.printStackTrace(); }
/*
FortunaStandalone f = new FortunaStandalone();
java.util.HashMap props = new java.util.HashMap();
byte initSeed[] = new byte[1234];
@@ -351,5 +390,6 @@ public class FortunaStandalone extends BasePRNGStandalone implements Serializabl
}
long time = System.currentTimeMillis() - before;
System.out.println("512MB took " + time + ", or " + (8*64d)/((double)time/1000d) +"MBps");
*/
}
}

View File

@@ -1,7 +1,7 @@
package gnu.crypto.prng;
// ----------------------------------------------------------------------------
// $Id: IRandom.java,v 1.12 2005/10/06 04:24:17 rsdio Exp $
// $Id: IRandomStandalone.java,v 1.1 2005/10/22 13:10:00 jrandom Exp $
//
// Copyright (C) 2001, 2002, 2003 Free Software Foundation, Inc.
//
@@ -82,9 +82,9 @@ import java.util.Map;
* Menezes, A., van Oorschot, P. and S. Vanstone.</li>
* </ol>
*
* @version $Revision: 1.12 $
* @version $Revision: 1.1 $
*/
public interface IRandom extends Cloneable {
public interface IRandomStandalone extends Cloneable {
// Constants
// -------------------------------------------------------------------------
@@ -110,32 +110,32 @@ public interface IRandom extends Cloneable {
void init(Map attributes);
/**
* <p>Returns the next 8 bits of random data generated from this instance.</p>
*
* @return the next 8 bits of random data generated from this instance.
* @exception IllegalStateException if the instance is not yet initialised.
* @exception LimitReachedException if this instance has reached its
* theoretical limit for generating non-repetitive pseudo-random data.
*/
byte nextByte() throws IllegalStateException, LimitReachedException;
* <p>Returns the next 8 bits of random data generated from this instance.</p>
*
* @return the next 8 bits of random data generated from this instance.
* @exception IllegalStateException if the instance is not yet initialised.
* @exception LimitReachedExceptionStandalone if this instance has reached its
* theoretical limit for generating non-repetitive pseudo-random data.
*/
byte nextByte() throws IllegalStateException, LimitReachedExceptionStandalone;
/**
* <p>Fills the designated byte array, starting from byte at index
* <code>offset</code>, for a maximum of <code>length</code> bytes with the
* output of this generator instance.
*
* @param out the placeholder to contain the generated random bytes.
* @param offset the starting index in <i>out</i> to consider. This method
* does nothing if this parameter is not within <code>0</code> and
* <code>out.length</code>.
* @param length the maximum number of required random bytes. This method
* does nothing if this parameter is less than <code>1</code>.
* @exception IllegalStateException if the instance is not yet initialised.
* @exception LimitReachedException if this instance has reached its
* theoretical limit for generating non-repetitive pseudo-random data.
*/
* <p>Fills the designated byte array, starting from byte at index
* <code>offset</code>, for a maximum of <code>length</code> bytes with the
* output of this generator instance.
*
* @param out the placeholder to contain the generated random bytes.
* @param offset the starting index in <i>out</i> to consider. This method
* does nothing if this parameter is not within <code>0</code> and
* <code>out.length</code>.
* @param length the maximum number of required random bytes. This method
* does nothing if this parameter is less than <code>1</code>.
* @exception IllegalStateException if the instance is not yet initialised.
* @exception LimitReachedExceptionStandalone if this instance has reached its
* theoretical limit for generating non-repetitive pseudo-random data.
*/
void nextBytes(byte[] out, int offset, int length)
throws IllegalStateException, LimitReachedException;
throws IllegalStateException, LimitReachedExceptionStandalone;
/**
* <p>Supplement, or possibly replace, the random state of this PRNG with

View File

@@ -1,7 +1,7 @@
package gnu.crypto.prng;
// ----------------------------------------------------------------------------
// $Id: LimitReachedException.java,v 1.5 2005/10/06 04:24:17 rsdio Exp $
// $Id: LimitReachedExceptionStandalone.java,v 1.1 2005/10/22 13:10:00 jrandom Exp $
//
// Copyright (C) 2001, 2002, Free Software Foundation, Inc.
//
@@ -47,9 +47,9 @@ package gnu.crypto.prng;
* A checked exception that indicates that a pseudo random number generator has
* reached its theoretical limit in generating random bytes.
*
* @version $Revision: 1.5 $
* @version $Revision: 1.1 $
*/
public class LimitReachedException extends Exception {
public class LimitReachedExceptionStandalone extends Exception {
// Constants and variables
// -------------------------------------------------------------------------
@@ -57,11 +57,11 @@ public class LimitReachedException extends Exception {
// Constructor(s)
// -------------------------------------------------------------------------
public LimitReachedException() {
public LimitReachedExceptionStandalone() {
super();
}
public LimitReachedException(String msg) {
public LimitReachedExceptionStandalone(String msg) {
super(msg);
}

View File

@@ -1,4 +1,4 @@
/* RandomEventListener.java -- event listener
/* RandomEventListenerStandalone.java -- event listener
Copyright (C) 2004 Free Software Foundation, Inc.
This file is part of GNU Crypto.
@@ -47,7 +47,7 @@ import java.util.EventListener;
* An interface for entropy accumulators that will be notified of random
* events.
*/
public interface RandomEventListener extends EventListener
public interface RandomEventListenerStandalone extends EventListener
{
void addRandomEvent(RandomEvent event);
void addRandomEvent(RandomEventStandalone event);
}

View File

@@ -1,4 +1,4 @@
/* RandomEvent.java -- a random event.
/* RandomEventStandalone.java -- a random event.
Copyright (C) 2004 Free Software Foundation, Inc.
This file is part of GNU Crypto.
@@ -47,14 +47,14 @@ import java.util.EventObject;
* An interface for entropy accumulators that will be notified of random
* events.
*/
public class RandomEvent extends EventObject
public class RandomEventStandalone extends EventObject
{
private final byte sourceNumber;
private final byte poolNumber;
private final byte[] data;
public RandomEvent(Object source, byte sourceNumber, byte poolNumber,
public RandomEventStandalone(Object source, byte sourceNumber, byte poolNumber,
byte[] data)
{
super(source);

View File

@@ -14,8 +14,8 @@ package net.i2p;
*
*/
public class CoreVersion {
public final static String ID = "$Revision: 1.54 $ $Date: 2006/02/21 10:20:17 $";
public final static String VERSION = "0.6.1.12";
public final static String ID = "$Revision: 1.69 $ $Date: 2006-10-08 20:44:55 $";
public final static String VERSION = "0.6.1.27";
public static void main(String args[]) {
System.out.println("I2P Core version: " + VERSION);

View File

@@ -14,6 +14,7 @@ import net.i2p.crypto.DummyElGamalEngine;
import net.i2p.crypto.DummyPooledRandomSource;
import net.i2p.crypto.ElGamalAESEngine;
import net.i2p.crypto.ElGamalEngine;
import net.i2p.crypto.HMAC256Generator;
import net.i2p.crypto.HMACGenerator;
import net.i2p.crypto.KeyGenerator;
import net.i2p.crypto.PersistentSessionKeyManager;
@@ -67,8 +68,9 @@ public class I2PAppContext {
private AESEngine _AESEngine;
private LogManager _logManager;
private HMACGenerator _hmac;
private HMAC256Generator _hmac256;
private SHA256Generator _sha;
private Clock _clock;
protected Clock _clock; // overridden in RouterContext
private DSAEngine _dsa;
private RoutingKeyGenerator _routingKeyGenerator;
private RandomSource _random;
@@ -82,8 +84,9 @@ public class I2PAppContext {
private volatile boolean _AESEngineInitialized;
private volatile boolean _logManagerInitialized;
private volatile boolean _hmacInitialized;
private volatile boolean _hmac256Initialized;
private volatile boolean _shaInitialized;
private volatile boolean _clockInitialized;
protected volatile boolean _clockInitialized; // used in RouterContext
private volatile boolean _dsaInitialized;
private volatile boolean _routingKeyGeneratorInitialized;
private volatile boolean _randomInitialized;
@@ -353,6 +356,19 @@ public class I2PAppContext {
_hmacInitialized = true;
}
}
public HMAC256Generator hmac256() {
if (!_hmac256Initialized) initializeHMAC256();
return _hmac256;
}
private void initializeHMAC256() {
synchronized (this) {
if (_hmac256 == null) {
_hmac256 = new HMAC256Generator(this);
}
_hmac256Initialized = true;
}
}
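The `hmac256()` accessor added above follows I2PAppContext's lazy-initialization idiom: a `volatile` flag checked without locking, with construction done under `synchronized`. A stripped-down sketch (a plain `Object` stands in for `HMAC256Generator`):

```java
public class LazyInit {
    private volatile boolean _initialized;
    private Object _instance;

    public Object instance() {
        if (!_initialized) initialize();
        return _instance;
    }

    private synchronized void initialize() {
        if (_instance == null)
            _instance = new Object();
        _initialized = true; // volatile write publishes the constructed instance
    }
}
```

This is the double-checked-locking shape; it is safe here because the flag is `volatile` and is set only after construction completes inside the lock.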
/**
* Our SHA256 instance (see the hmac discussion for why it's context-specific)
@@ -411,11 +427,11 @@ public class I2PAppContext {
* enable simulators to play with clock skew among different instances.
*
*/
public Clock clock() {
public Clock clock() { // overridden in RouterContext
if (!_clockInitialized) initializeClock();
return _clock;
}
private void initializeClock() {
protected void initializeClock() { // overridden in RouterContext
synchronized (this) {
if (_clock == null)
_clock = new Clock(this);

View File

@@ -320,9 +320,9 @@ public class DHSessionKeyBuilder {
if (_myPrivateValue == null) generateMyValue();
_sessionKey = calculateSessionKey(_myPrivateValue, _peerValue);
} else {
System.err.println("Not ready yet.. privateValue and peerValue must be set ("
+ (_myPrivateValue != null ? "set" : "null") + ","
+ (_peerValue != null ? "set" : "null") + ")");
//System.err.println("Not ready yet.. privateValue and peerValue must be set ("
// + (_myPrivateValue != null ? "set" : "null") + ","
// + (_peerValue != null ? "set" : "null") + ")");
}
return _sessionKey;
}

View File

@@ -0,0 +1,51 @@
package net.i2p.crypto;
import gnu.crypto.hash.Sha256Standalone;
import net.i2p.I2PAppContext;
import net.i2p.data.Base64;
import net.i2p.data.Hash;
import net.i2p.data.SessionKey;
import org.bouncycastle.crypto.Digest;
import org.bouncycastle.crypto.macs.HMac;
/**
* Calculate the HMAC-SHA256 of a key+message. All the good stuff occurs
* in {@link org.bouncycastle.crypto.macs.HMac} and
* {@link gnu.crypto.hash.Sha256Standalone}.
*
*/
public class HMAC256Generator extends HMACGenerator {
public HMAC256Generator(I2PAppContext context) { super(context); }
protected HMac acquire() {
synchronized (_available) {
if (_available.size() > 0)
return (HMac)_available.remove(0);
}
// the HMAC is hardcoded to use SHA256 digest size
// for backwards compatibility. next time we have a backwards
// incompatible change, we should update this by removing ", 32"
return new HMac(new Sha256ForMAC());
}
private class Sha256ForMAC extends Sha256Standalone implements Digest {
public String getAlgorithmName() { return "sha256 for hmac"; }
public int getDigestSize() { return 32; }
public int doFinal(byte[] out, int outOff) {
byte rv[] = digest();
System.arraycopy(rv, 0, out, outOff, rv.length);
reset();
return rv.length;
}
}
public static void main(String args[]) {
I2PAppContext ctx = I2PAppContext.getGlobalContext();
byte data[] = new byte[64];
ctx.random().nextBytes(data);
SessionKey key = ctx.keyGenerator().generateSessionKey();
Hash mac = ctx.hmac256().calculate(key, data);
System.out.println(Base64.encode(mac.getData()));
}
}
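For comparison with the hand-rolled `HMac(new Sha256ForMAC())` above, HMAC-SHA256 over a key and message can be computed with the standard JCE API. This is a sketch for illustration only: `HMAC256Generator` wraps keys in `SessionKey` and has its own conventions, so its outputs are not asserted to match this.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacSha256Demo {
    /** computes HMAC-SHA256(key, data); output is always 32 bytes */
    static byte[] hmacSha256(byte[] key, byte[] data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(data);
        } catch (Exception e) {
            throw new RuntimeException(e); // HmacSHA256 is a mandatory JCE algorithm
        }
    }
}
```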

View File

@@ -20,7 +20,7 @@ import org.bouncycastle.crypto.macs.HMac;
public class HMACGenerator {
private I2PAppContext _context;
/** set of available HMAC instances for calculate */
private List _available;
protected List _available;
/** set of available byte[] buffers for verify */
private List _availableTmp;
@@ -85,7 +85,7 @@ public class HMACGenerator {
return eq;
}
private HMac acquire() {
protected HMac acquire() {
synchronized (_available) {
if (_available.size() > 0)
return (HMac)_available.remove(0);

View File

@@ -9,10 +9,13 @@ package net.i2p.crypto;
*
*/
import gnu.crypto.hash.Sha256Standalone;
import java.math.BigInteger;
import net.i2p.I2PAppContext;
import net.i2p.data.Base64;
import net.i2p.data.DataHelper;
import net.i2p.data.Hash;
import net.i2p.data.PrivateKey;
import net.i2p.data.PublicKey;
import net.i2p.data.SessionKey;
@@ -53,6 +56,18 @@ public class KeyGenerator {
return key;
}
private static final int PBE_ROUNDS = 1000;
/** PBE the passphrase with the salt */
public SessionKey generateSessionKey(byte salt[], byte passphrase[]) {
byte salted[] = new byte[16+passphrase.length];
System.arraycopy(salt, 0, salted, 0, Math.min(salt.length, 16));
System.arraycopy(passphrase, 0, salted, 16, passphrase.length);
byte h[] = _context.sha().calculateHash(salted).getData();
for (int i = 1; i < PBE_ROUNDS; i++)
_context.sha().calculateHash(h, 0, Hash.HASH_LENGTH, h, 0);
return new SessionKey(h);
}
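The new `generateSessionKey(salt, passphrase)` above is a simple iterated-hash PBE. A standalone sketch of the same shape using `java.security.MessageDigest` (byte layout simplified: the real method copies salt and passphrase into one fixed-offset buffer, and modern code would reach for PBKDF2 instead):

```java
import java.security.MessageDigest;

public class SimplePbe {
    static final int PBE_ROUNDS = 1000;

    /** hash(salt || passphrase), then re-hash the digest PBE_ROUNDS - 1 times */
    static byte[] deriveKey(byte[] salt, byte[] passphrase) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            sha.update(salt);
            sha.update(passphrase);
            byte[] h = sha.digest();
            for (int i = 1; i < PBE_ROUNDS; i++)
                h = sha.digest(h); // digest(byte[]) updates, finishes, and resets
            return h;
        } catch (Exception e) {
            throw new RuntimeException(e); // SHA-256 is always available
        }
    }
}
```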
/** standard exponent size */
private static final int PUBKEY_EXPONENT_SIZE_FULL = 2048;
/**
@@ -95,7 +110,7 @@ public class KeyGenerator {
* @return the corresponding PublicKey object
*/
public static PublicKey getPublicKey(PrivateKey priv) {
BigInteger a = new NativeBigInteger(priv.toByteArray());
BigInteger a = new NativeBigInteger(1, priv.toByteArray());
BigInteger aalpha = CryptoConstants.elgg.modPow(a, CryptoConstants.elgp);
PublicKey pub = new PublicKey();
byte [] pubBytes = aalpha.toByteArray();
@@ -132,7 +147,7 @@ public class KeyGenerator {
* @return a SigningPublicKey object
*/
public static SigningPublicKey getSigningPublicKey(SigningPrivateKey priv) {
BigInteger x = new NativeBigInteger(priv.toByteArray());
BigInteger x = new NativeBigInteger(1, priv.toByteArray());
BigInteger y = CryptoConstants.dsag.modPow(x, CryptoConstants.dsap);
SigningPublicKey pub = new SigningPublicKey();
byte [] pubBytes = padBuffer(y.toByteArray(), SigningPublicKey.KEYSIZE_BYTES);
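Both hunks above make the same fix: wrapping raw key bytes with the `(int signum, byte[] magnitude)` constructor. The one-argument `BigInteger(byte[])` constructor treats the array as two's complement, so a key whose first byte has the high bit set would be read as a negative exponent. A small demonstration with plain `java.math.BigInteger` (the values are illustrative):

```java
import java.math.BigInteger;

public class SignumDemo {
    public static void main(String[] args) {
        // A key byte array whose leading byte has the high bit set.
        byte[] mag = new byte[] { (byte) 0x80, 0x01 };
        BigInteger signed   = new BigInteger(mag);    // two's complement: -32767
        BigInteger unsigned = new BigInteger(1, mag); // explicit positive signum: 32769
        System.out.println(signed.signum() + " vs " + unsigned.signum());
    }
}
```

With the negative interpretation, `modPow` would compute with the wrong exponent; the `(1, bytes)` form always treats the key as an unsigned magnitude.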

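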

@@ -67,7 +67,7 @@ class TransientSessionKeyManager extends SessionKeyManager {
_log = context.logManager().getLog(TransientSessionKeyManager.class);
_context = context;
_outboundSessions = new HashMap(1024);
_inboundTagSets = new HashMap(64*1024);
_inboundTagSets = new HashMap(1024);
context.statManager().createRateStat("crypto.sessionTagsExpired", "How many tags/sessions are expired?", "Encryption", new long[] { 10*60*1000, 60*60*1000, 3*60*60*1000 });
context.statManager().createRateStat("crypto.sessionTagsRemaining", "How many tags/sessions are remaining after a cleanup?", "Encryption", new long[] { 10*60*1000, 60*60*1000, 3*60*60*1000 });
SimpleTimer.getInstance().addEvent(new CleanupEvent(), 60*1000);


@@ -9,6 +9,7 @@ package net.i2p.data;
*
*/
import gnu.crypto.hash.Sha256Standalone;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.ByteArrayInputStream;
@@ -770,9 +771,11 @@ public class DataHelper {
* Read a newline delimited line from the stream, returning the line (without
* the newline), or null if EOF reached before the newline was found
*/
public static String readLine(InputStream in) throws IOException {
public static String readLine(InputStream in) throws IOException { return readLine(in, (Sha256Standalone)null); }
/** update the hash along the way */
public static String readLine(InputStream in, Sha256Standalone hash) throws IOException {
StringBuffer buf = new StringBuffer(128);
boolean ok = readLine(in, buf);
boolean ok = readLine(in, buf, hash);
if (ok)
return buf.toString();
else
@@ -785,8 +788,13 @@ public class DataHelper {
* newline was found
*/
public static boolean readLine(InputStream in, StringBuffer buf) throws IOException {
return readLine(in, buf, null);
}
/** update the hash along the way */
public static boolean readLine(InputStream in, StringBuffer buf, Sha256Standalone hash) throws IOException {
int c = -1;
while ( (c = in.read()) != -1) {
if (hash != null) hash.update((byte)c);
if (c == '\n')
break;
buf.append((char)c);
@@ -797,6 +805,10 @@ public class DataHelper {
return true;
}
public static void write(OutputStream out, byte data[], Sha256Standalone hash) throws IOException {
hash.update(data);
out.write(data);
}
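The new `readLine` overloads fold every byte read, newline included, into a running digest, so a caller can verify a whole stream while still consuming it line by line. A self-contained sketch of the same pattern, again substituting the JDK's `MessageDigest` for `Sha256Standalone`:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;

public class HashingReadLine {
    /** Read one newline-delimited line (without the newline), updating the
        digest with every byte read; returns null at EOF, per the javadoc above. */
    public static String readLine(InputStream in, MessageDigest hash) throws IOException {
        StringBuilder buf = new StringBuilder(128);
        int c;
        while ((c = in.read()) != -1) {
            if (hash != null) hash.update((byte) c);
            if (c == '\n') return buf.toString();
            buf.append((char) c);
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        InputStream in = new ByteArrayInputStream("hello\nworld\n".getBytes("UTF-8"));
        System.out.println(readLine(in, sha));
        System.out.println(readLine(in, sha));
        // sha.digest() now covers the exact bytes consumed, newlines included.
    }
}
```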
public static List sortStructures(Collection dataStructures) {
if (dataStructures == null) return new ArrayList();
@@ -822,6 +834,8 @@ public class DataHelper {
return (ms / (60 * 1000)) + "m";
} else if (ms < 3 * 24 * 60 * 60 * 1000) {
return (ms / (60 * 60 * 1000)) + "h";
} else if (ms > 365l * 24l * 60l * 60l * 1000l) {
return "n/a";
} else {
return (ms / (24 * 60 * 60 * 1000)) + "d";
}


@@ -32,6 +32,7 @@ public class PrivateKey extends DataStructureImpl {
public PrivateKey() {
setData(null);
}
public PrivateKey(byte data[]) { setData(data); }
/** constructs from base64
* @param base64Data a string of base64 data (the output of .toBase64() called


@@ -31,6 +31,11 @@ public class PublicKey extends DataStructureImpl {
public PublicKey() {
setData(null);
}
public PublicKey(byte data[]) {
if ( (data == null) || (data.length != KEYSIZE_BYTES) )
throw new IllegalArgumentException("Data must be specified, and the correct size");
setData(data);
}
/** constructs from base64
* @param base64Data a string of base64 data (the output of .toBase64() called


@@ -86,7 +86,7 @@ public class RouterIdentity extends DataStructureImpl {
public void writeBytes(OutputStream out) throws DataFormatException, IOException {
if ((_certificate == null) || (_publicKey == null) || (_signingKey == null))
throw new DataFormatException("Not enough data to format the destination");
throw new DataFormatException("Not enough data to format the router identity");
_publicKey.writeBytes(out);
_signingKey.writeBytes(out);
_certificate.writeBytes(out);


@@ -53,6 +53,9 @@ public class RouterInfo extends DataStructureImpl {
public static final String PROP_CAPABILITIES = "caps";
public static final char CAPABILITY_HIDDEN = 'H';
// Public string of chars which serve as bandwidth capacity markers
// NOTE: individual chars defined in Router.java
public static final String BW_CAPABILITY_CHARS = "KLMNO";
public RouterInfo() {
setIdentity(null);
@@ -179,6 +182,12 @@ public class RouterInfo extends DataStructureImpl {
return (Properties) _options.clone();
}
}
public String getOption(String opt) {
if (_options == null) return null;
synchronized (_options) {
return _options.getProperty(opt);
}
}
/**
* Configure a set of options or statistics that the router can expose
@@ -334,6 +343,24 @@ public class RouterInfo extends DataStructureImpl {
return (getCapabilities().indexOf(CAPABILITY_HIDDEN) != -1);
}
/**
* Return a string representation of this node's bandwidth tier,
* or "Unknown"
*/
public String getBandwidthTier() {
String bwTiers = BW_CAPABILITY_CHARS;
String bwTier = "Unknown";
String capabilities = getCapabilities();
// Iterate through capabilities, searching for known bandwidth tier
for (int i = 0; i < capabilities.length(); i++) {
if (bwTiers.indexOf(String.valueOf(capabilities.charAt(i))) != -1) {
bwTier = String.valueOf(capabilities.charAt(i));
break;
}
}
return (bwTier);
}
public void addCapability(char cap) {
if (_options == null) _options = new OrderedProperties();
synchronized (_options) {


@@ -29,6 +29,8 @@ public class BufferedStatLog implements StatLog {
private String _lastFilters;
private BufferedWriter _out;
private String _outFile;
/** short circuit for adding data, set to true if some filters are set, false if it's empty (so we can skip the sync) */
private volatile boolean _filtersSpecified;
private static final int BUFFER_SIZE = 1024;
private static final boolean DISABLE_LOGGING = false;
@@ -44,6 +46,7 @@ public class BufferedStatLog implements StatLog {
_lastWrite = _events.length-1;
_statFilters = new ArrayList(10);
_flushFrequency = 500;
_filtersSpecified = false;
I2PThread writer = new I2PThread(new StatLogWriter(), "StatLogWriter");
writer.setDaemon(true);
writer.start();
@@ -51,6 +54,7 @@ public class BufferedStatLog implements StatLog {
public void addData(String scope, String stat, long value, long duration) {
if (DISABLE_LOGGING) return;
if (!shouldLog(stat)) return;
synchronized (_events) {
_events[_eventNext].init(scope, stat, value, duration);
_eventNext = (_eventNext + 1) % _events.length;
@@ -72,6 +76,7 @@ public class BufferedStatLog implements StatLog {
}
private boolean shouldLog(String stat) {
if (!_filtersSpecified) return false;
synchronized (_statFilters) {
return _statFilters.contains(stat) || _statFilters.contains("*");
}
@@ -88,11 +93,18 @@ public class BufferedStatLog implements StatLog {
_statFilters.clear();
while (tok.hasMoreTokens())
_statFilters.add(tok.nextToken().trim());
if (_statFilters.size() > 0)
_filtersSpecified = true;
else
_filtersSpecified = false;
}
}
_lastFilters = val;
} else {
synchronized (_statFilters) { _statFilters.clear(); }
synchronized (_statFilters) {
_statFilters.clear();
_filtersSpecified = false;
}
}
String filename = _context.getProperty(StatManager.PROP_STAT_FILE);
@@ -146,7 +158,7 @@ public class BufferedStatLog implements StatLog {
updateFilters();
int cur = start;
while (cur != end) {
if (shouldLog(_events[cur].getStat())) {
//if (shouldLog(_events[cur].getStat())) {
String when = null;
synchronized (_fmt) {
when = _fmt.format(new Date(_events[cur].getTime()));
@@ -164,7 +176,7 @@ public class BufferedStatLog implements StatLog {
_out.write(" ");
_out.write(Long.toString(_events[cur].getDuration()));
_out.write("\n");
}
//}
cur = (cur + 1) % _events.length;
}
_out.flush();


@@ -26,6 +26,8 @@ public class Rate {
private volatile double _lifetimeTotalValue;
private volatile long _lifetimeEventCount;
private volatile long _lifetimeTotalEventTime;
private RateSummaryListener _summaryListener;
private RateStat _stat;
private volatile long _lastCoalesceDate;
private long _creationDate;
@@ -108,6 +110,9 @@ public class Rate {
public long getPeriod() {
return _period;
}
public RateStat getRateStat() { return _stat; }
public void setRateStat(RateStat rs) { _stat = rs; }
/**
*
@@ -175,12 +180,16 @@ public class Rate {
}
}
/** 2s is plenty of slack to deal with slow coalescing (across many stats) */
private static final int SLACK = 2000;
public void coalesce() {
long now = now();
synchronized (_lock) {
long measuredPeriod = now - _lastCoalesceDate;
if (measuredPeriod < _period) {
// no need to coalesce
if (measuredPeriod < _period - SLACK) {
// no need to coalesce (assuming we only try to do so once per minute)
if (_log.shouldLog(Log.WARN))
_log.warn("not coalescing, measuredPeriod = " + measuredPeriod + " period = " + _period);
return;
}
@@ -189,7 +198,7 @@ public class Rate {
// how much were we off by? (so that we can sample down the measured values)
double periodFactor = measuredPeriod / (double)_period;
_lastTotalValue = _currentTotalValue / periodFactor;
_lastEventCount = (long) (_currentEventCount / periodFactor);
_lastEventCount = (long) ( (_currentEventCount + periodFactor - 1) / periodFactor);
_lastTotalEventTime = (long) (_currentTotalEventTime / periodFactor);
_lastCoalesceDate = now;
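When coalescing runs late, `measuredPeriod` exceeds the nominal period and the accumulated totals are scaled down by `periodFactor`. The event-count line changes from plain truncation to a round-up, so a single event in a late period is no longer scaled to zero. A worked example with illustrative values:

```java
public class CoalesceScaleDemo {
    public static void main(String[] args) {
        long period = 60_000, measuredPeriod = 75_000; // coalesce ran 15s late
        double periodFactor = measuredPeriod / (double) period; // 1.25
        long events = 1;
        long oldScaled = (long) (events / periodFactor); // 0: the lone event vanished
        long newScaled = (long) ((events + periodFactor - 1) / periodFactor); // 1: kept
        System.out.println(periodFactor + " " + oldScaled + " " + newScaled);
    }
}
```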
@@ -203,8 +212,13 @@ public class Rate {
_currentEventCount = 0;
_currentTotalEventTime = 0;
}
if (_summaryListener != null)
_summaryListener.add(_lastTotalValue, _lastEventCount, _lastTotalEventTime, _period);
}
public void setSummaryListener(RateSummaryListener listener) { _summaryListener = listener; }
public RateSummaryListener getSummaryListener() { return _summaryListener; }
/** what was the average value across the events in the last period? */
public double getAverageValue() {
if ((_lastTotalValue != 0) && (_lastEventCount > 0))
@@ -419,6 +433,7 @@ public class Rate {
public boolean equals(Object obj) {
if ((obj == null) || (obj.getClass() != Rate.class)) return false;
if (obj == this) return true;
Rate r = (Rate) obj;
return _period == r.getPeriod() && _creationDate == r.getCreationDate() &&
//_lastCoalesceDate == r.getLastCoalesceDate() &&


@@ -27,8 +27,10 @@ public class RateStat {
_description = description;
_groupName = group;
_rates = new Rate[periods.length];
for (int i = 0; i < periods.length; i++)
for (int i = 0; i < periods.length; i++) {
_rates[i] = new Rate(periods[i]);
_rates[i].setRateStat(this);
}
}
public void setStatLog(StatLog sl) { _statLog = sl; }
@@ -159,6 +161,7 @@ public class RateStat {
_rates[i].load(props, curPrefix, treatAsCurrent);
} catch (IllegalArgumentException iae) {
_rates[i] = new Rate(period);
_rates[i].setRateStat(this);
if (_log.shouldLog(Log.WARN))
_log.warn("Rate for " + prefix + " is corrupt, reinitializing that period");
}


@@ -0,0 +1,14 @@
package net.i2p.stat;
/**
* Receive the state of the rate when it's coalesced
*/
public interface RateSummaryListener {
/**
* @param totalValue sum of all event values in the most recent period
* @param eventCount how many events occurred
* @param totalEventTime how long the events were running for
* @param period how long this period is
*/
void add(double totalValue, long eventCount, double totalEventTime, long period);
}


@@ -43,7 +43,7 @@ public class StatManager {
_log = context.logManager().getLog(StatManager.class);
_context = context;
_frequencyStats = Collections.synchronizedMap(new HashMap(128));
_rateStats = Collections.synchronizedMap(new HashMap(128));
_rateStats = new HashMap(128); // synchronized only on add //Collections.synchronizedMap(new HashMap(128));
_statLog = new BufferedStatLog(context);
}
@@ -80,10 +80,12 @@ public class StatManager {
* @param periods array of period lengths (in milliseconds)
*/
public void createRateStat(String name, String description, String group, long periods[]) {
if (_rateStats.containsKey(name)) return;
RateStat rs = new RateStat(name, description, group, periods);
if (_statLog != null) rs.setStatLog(_statLog);
_rateStats.put(name, rs);
synchronized (_rateStats) {
if (_rateStats.containsKey(name)) return;
RateStat rs = new RateStat(name, description, group, periods);
if (_statLog != null) rs.setStatLog(_statLog);
_rateStats.put(name, rs);
}
}
/** update the given frequency statistic, taking note that an event occurred (and recalculating all frequencies) */
@@ -94,7 +96,7 @@ public class StatManager {
/** update the given rate statistic, taking note that the given data point was received (and recalculating all rates) */
public void addRateData(String name, long data, long eventDuration) {
RateStat stat = (RateStat) _rateStats.get(name);
RateStat stat = (RateStat) _rateStats.get(name); // unsynchronized
if (stat != null) stat.addData(data, eventDuration);
}


@@ -112,11 +112,19 @@ public class NtpClient {
// Process response
NtpMessage msg = new NtpMessage(packet.getData());
double roundTripDelay = (destinationTimestamp-msg.originateTimestamp) -
(msg.receiveTimestamp-msg.transmitTimestamp);
double localClockOffset = ((msg.receiveTimestamp - msg.originateTimestamp) +
(msg.transmitTimestamp - destinationTimestamp)) / 2;
socket.close();
// Stratum must be between 1 (atomic) and 15 (maximum defined value)
// Anything else is right out, treat such responses like errors
if ((msg.stratum < 1) || (msg.stratum > 15)) {
//System.out.println("Response from NTP server of unacceptable stratum " + msg.stratum + ", failing.");
return(-1);
}
long rv = (long)(System.currentTimeMillis() + localClockOffset*1000);
//System.out.println("host: " + address.getHostAddress() + " rtt: " + roundTripDelay + " offset: " + localClockOffset + " seconds");
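The two formulas above are the standard NTP round-trip-delay and clock-offset calculations over the four timestamps (originate, receive, transmit, destination). A worked example with illustrative values, in seconds:

```java
public class NtpOffsetDemo {
    public static void main(String[] args) {
        double t1 = 100.000; // originate:   client sent the request
        double t2 = 100.400; // receive:     server got the request
        double t3 = 100.500; // transmit:    server sent the reply
        double t4 = 100.300; // destination: client got the reply
        // Time on the wire, with the server's processing time subtracted out.
        double roundTripDelay = (t4 - t1) - (t3 - t2);         // 0.2 s
        // Average of the two one-way skews; positive means our clock is behind.
        double localClockOffset = ((t2 - t1) + (t3 - t4)) / 2; // +0.3 s
        System.out.println(roundTripDelay + " " + localClockOffset);
    }
}
```

The stratum check added above then discards replies outside stratum 1..15 before this offset is ever applied to the local clock.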


@@ -26,7 +26,7 @@ public class Timestamper implements Runnable {
private boolean _initialized;
private static final int DEFAULT_QUERY_FREQUENCY = 5*60*1000;
private static final String DEFAULT_SERVER_LIST = "pool.ntp.org, pool.ntp.org, pool.ntp.org";
private static final String DEFAULT_SERVER_LIST = "0.pool.ntp.org, 1.pool.ntp.org, 2.pool.ntp.org";
private static final boolean DEFAULT_DISABLED = true;
/** how many times do we have to query if we are changing the clock? */
private static final int DEFAULT_CONCURRING_SERVERS = 3;


@@ -13,12 +13,16 @@ import net.i2p.time.Timestamper;
* between the local computer's current time and the time as known by some reference
* (such as an NTP synchronized clock).
*
* Protected members are used in the subclass RouterClock,
* which has access to a router's transports (particularly peer clock skews)
* to second-guess the sanity of clock adjustments.
*
*/
public class Clock implements Timestamper.UpdateListener {
private I2PAppContext _context;
protected I2PAppContext _context;
private Timestamper _timestamper;
private long _startedOn;
private boolean _statCreated;
protected long _startedOn;
protected boolean _statCreated;
public Clock(I2PAppContext context) {
_context = context;
@@ -36,10 +40,10 @@ public class Clock implements Timestamper.UpdateListener {
public Timestamper getTimestamper() { return _timestamper; }
/** we fetch it on demand to avoid circular dependencies (logging uses the clock) */
private Log getLog() { return _context.logManager().getLog(Clock.class); }
protected Log getLog() { return _context.logManager().getLog(Clock.class); }
private volatile long _offset;
private boolean _alreadyChanged;
protected volatile long _offset;
protected boolean _alreadyChanged;
private Set _listeners;
/** if the clock is skewed by 3+ days, fuck 'em */
@@ -132,7 +136,7 @@ public class Clock implements Timestamper.UpdateListener {
}
}
private void fireOffsetChanged(long delta) {
protected void fireOffsetChanged(long delta) {
synchronized (_listeners) {
for (Iterator iter = _listeners.iterator(); iter.hasNext();) {
ClockUpdateListener lsnr = (ClockUpdateListener) iter.next();


@@ -13,6 +13,7 @@ import net.i2p.util.Log;
public class EepPost {
private I2PAppContext _context;
private Log _log;
private static final String CRLF = "\r\n";
public EepPost() {
this(I2PAppContext.getGlobalContext());
@@ -65,6 +66,7 @@ public class EepPost {
_onCompletion = onCompletion;
}
public void run() {
if (_log.shouldLog(Log.DEBUG)) _log.debug("Running the post task");
Socket s = null;
try {
URL u = new URL(_url);
@@ -81,17 +83,20 @@ public class EepPost {
_proxyPort = p;
}
if (_log.shouldLog(Log.DEBUG)) _log.debug("Connecting to the server/proxy...");
s = new Socket(_proxyHost, _proxyPort);
if (_log.shouldLog(Log.DEBUG)) _log.debug("Connected");
OutputStream out = s.getOutputStream();
String sep = getSeparator();
long length = calcContentLength(sep, _fields);
if (_log.shouldLog(Log.DEBUG)) _log.debug("Separator: " + sep + " content length: " + length);
String header = getHeader(isProxy, path, h, p, sep, length);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Header: \n" + header);
out.write(header.getBytes());
out.flush();
if (false) {
out.write(("--" + sep + "\ncontent-disposition: form-data; name=\"field1\"\n\nStuff goes here\n--" + sep + "--\n").getBytes());
out.write(("--" + sep + CRLF + "content-disposition: form-data; name=\"field1\"" + CRLF + CRLF + "Stuff goes here" + CRLF + "--" + sep + "--" + CRLF).getBytes());
} else {
sendFields(out, sep, _fields);
}
@@ -121,18 +126,18 @@ public class EepPost {
Object val = fields.get(key);
if (val instanceof File) {
File f = (File)val;
len += ("--" + sep + "\nContent-Disposition: form-data; name=\"" + key + "\"; filename=\"" + f.getName() + "\"\n").length();
len += ("--" + sep + CRLF + "Content-Disposition: form-data; name=\"" + key + "\"; filename=\"" + f.getName() + "\"" + CRLF).length();
//len += ("Content-length: " + f.length() + "\n").length();
len += ("Content-Type: application/octet-stream\n\n").length();
len += ("Content-Type: application/octet-stream" + CRLF + CRLF).length();
len += f.length();
len += 1; // nl
len += CRLF.length(); // nl
} else {
len += ("--" + sep + "\nContent-Disposition: form-data; name=\"" + key + "\"\n\n").length();
len += ("--" + sep + CRLF + "Content-Disposition: form-data; name=\"" + key + "\"" + CRLF + CRLF).length();
len += val.toString().length();
len += 1; // nl
len += CRLF.length(); // nl
}
}
len += 2 + sep.length() + 2;
len += 2 + sep.length() + 2 + CRLF.length(); //2 + sep.length() + 2;
//len += 2;
return len;
}
@@ -145,29 +150,29 @@ public class EepPost {
else
sendField(out, separator, field, val.toString());
}
out.write(("--" + separator + "--\n").getBytes());
out.write(("--" + separator + "--" + CRLF).getBytes());
}
private void sendFile(OutputStream out, String separator, String field, File file) throws IOException {
long len = file.length();
out.write(("--" + separator + "\n").getBytes());
out.write(("Content-Disposition: form-data; name=\"" + field + "\"; filename=\"" + file.getName() + "\"\n").getBytes());
out.write(("--" + separator + CRLF).getBytes());
out.write(("Content-Disposition: form-data; name=\"" + field + "\"; filename=\"" + file.getName() + "\"" + CRLF).getBytes());
//out.write(("Content-length: " + len + "\n").getBytes());
out.write(("Content-Type: application/octet-stream\n\n").getBytes());
out.write(("Content-Type: application/octet-stream" + CRLF + CRLF).getBytes());
FileInputStream in = new FileInputStream(file);
byte buf[] = new byte[1024];
int read = -1;
while ( (read = in.read(buf)) != -1)
out.write(buf, 0, read);
out.write("\n".getBytes());
out.write(CRLF.getBytes());
in.close();
}
private void sendField(OutputStream out, String separator, String field, String val) throws IOException {
out.write(("--" + separator + "\n").getBytes());
out.write(("Content-Disposition: form-data; name=\"" + field + "\"\n\n").getBytes());
out.write(("--" + separator + CRLF).getBytes());
out.write(("Content-Disposition: form-data; name=\"" + field + "\"" + CRLF + CRLF).getBytes());
out.write(val.getBytes());
out.write("\n".getBytes());
out.write(CRLF.getBytes());
}
private String getHeader(boolean isProxy, String path, String host, int port, String separator, long length) {
@@ -179,16 +184,16 @@ public class EepPost {
buf.append(":").append(port);
}
buf.append(path);
buf.append(" HTTP/1.1\n");
buf.append(" HTTP/1.1" + CRLF);
buf.append("Host: ").append(host);
if (port != 80)
buf.append(":").append(port);
buf.append("\n");
buf.append("Connection: close\n");
buf.append("Content-length: ").append(length).append("\n");
buf.append(CRLF);
buf.append("Connection: close" + CRLF);
buf.append("Content-length: ").append(length).append(CRLF);
buf.append("Content-type: multipart/form-data, boundary=").append(separator);
buf.append("\n");
buf.append("\n");
buf.append(CRLF);
buf.append(CRLF);
return buf.toString();
}


@@ -0,0 +1,51 @@
package net.i2p.util;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import net.i2p.I2PAppContext;
class Executor implements Runnable {
private I2PAppContext _context;
private Log _log;
private List _readyEvents;
public Executor(I2PAppContext ctx, Log log, List events) {
_context = ctx;
_readyEvents = events;
}
public void run() {
while (true) {
SimpleTimer.TimedEvent evt = null;
synchronized (_readyEvents) {
if (_readyEvents.size() <= 0)
try { _readyEvents.wait(); } catch (InterruptedException ie) {}
if (_readyEvents.size() > 0)
evt = (SimpleTimer.TimedEvent)_readyEvents.remove(0);
}
if (evt != null) {
long before = _context.clock().now();
try {
evt.timeReached();
} catch (Throwable t) {
log("wtf, event borked: " + evt, t);
}
long time = _context.clock().now() - before;
if ( (time > 1000) && (_log != null) && (_log.shouldLog(Log.WARN)) )
_log.warn("wtf, event execution took " + time + ": " + evt);
}
}
}
private void log(String msg, Throwable t) {
synchronized (this) {
if (_log == null)
_log = I2PAppContext.getGlobalContext().logManager().getLog(SimpleTimer.class);
}
_log.log(Log.CRIT, msg, t);
}
}
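The `Executor` above blocks in `wait()` on the shared `_readyEvents` list, so whoever queues a ready event (the `SimpleTimer` runner) must add and notify under that same monitor. A minimal sketch of the producer half of the handoff, with hypothetical names:

```java
import java.util.ArrayList;
import java.util.List;

public class ReadyQueueDemo {
    /** Queue an event and wake any Executor blocked in wait() on the list. */
    public static void offer(List<Runnable> readyEvents, Runnable evt) {
        synchronized (readyEvents) {
            readyEvents.add(evt);
            readyEvents.notifyAll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final List<Runnable> ready = new ArrayList<Runnable>();
        final StringBuilder out = new StringBuilder();
        Thread consumer = new Thread(() -> {
            Runnable evt;
            synchronized (ready) {
                while (ready.isEmpty())
                    try { ready.wait(); } catch (InterruptedException ie) { return; }
                evt = ready.remove(0);
            }
            evt.run(); // run outside the lock, as the Executor above does
        });
        consumer.start();
        offer(ready, () -> out.append("ran"));
        consumer.join();
        System.out.println(out);
    }
}
```

Running the event outside the monitor is the same design choice the extracted `Executor` makes: a slow `timeReached()` should not block producers from queuing more events.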


@@ -14,7 +14,7 @@ import java.security.SecureRandom;
import net.i2p.I2PAppContext;
import net.i2p.crypto.EntropyHarvester;
import gnu.crypto.prng.FortunaStandalone;
import gnu.crypto.prng.AsyncFortunaStandalone;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
@@ -26,13 +26,13 @@ import java.io.IOException;
*
*/
public class FortunaRandomSource extends RandomSource implements EntropyHarvester {
private FortunaStandalone _fortuna;
private AsyncFortunaStandalone _fortuna;
private double _nextGaussian;
private boolean _haveNextGaussian;
public FortunaRandomSource(I2PAppContext context) {
super(context);
_fortuna = new FortunaStandalone();
_fortuna = new AsyncFortunaStandalone();
byte seed[] = new byte[1024];
if (initSeed(seed)) {
_fortuna.seed(seed);
@@ -41,6 +41,7 @@ public class FortunaRandomSource extends RandomSource implements EntropyHarveste
sr.nextBytes(seed);
_fortuna.seed(seed);
}
_fortuna.startup();
// kickstart it
_fortuna.nextBytes(seed);
_haveNextGaussian = false;
@@ -76,15 +77,33 @@ public class FortunaRandomSource extends RandomSource implements EntropyHarveste
if (n<=0)
throw new IllegalArgumentException("n must be positive");
if ((n & -n) == n) // i.e., n is a power of 2
return (int)((n * (long)nextBits(31)) >> 31);
////
// this shortcut from sun's docs neither works nor is necessary.
//
//if ((n & -n) == n) {
// // i.e., n is a power of 2
// return (int)((n * (long)nextBits(31)) >> 31);
//}
int bits, val;
do {
bits = nextBits(31);
val = bits % n;
} while(bits - val + (n-1) < 0);
return val;
int numBits = 0;
int remaining = n;
int rv = 0;
while (remaining > 0) {
remaining >>= 1;
rv += nextBits(8) << numBits*8;
numBits++;
}
if (rv < 0)
rv += n;
return rv % n;
//int bits, val;
//do {
// bits = nextBits(31);
// val = bits % n;
//} while(bits - val + (n-1) < 0);
//
//return val;
}
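For reference, the rejection loop the diff comments out is the standard technique from `java.util.Random.nextInt(int)`: draw 31 nonnegative bits, reduce mod n, and retry when the draw fell in the truncated final partial range so no residue class is favored. A standalone sketch of that approach (not the byte-accumulating replacement above):

```java
import java.util.Random;

public class BoundedRandomSketch {
    /** Uniform int in [0, n) via rejection sampling. */
    public static int nextInt(Random rng, int n) {
        if (n <= 0) throw new IllegalArgumentException("n must be positive");
        int bits, val;
        do {
            bits = rng.nextInt() >>> 1; // 31 uniform nonnegative bits
            val = bits % n;
        } while (bits - val + (n - 1) < 0); // reject draws from the partial range
        return val;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int i = 0; i < 10_000; i++) {
            int v = nextInt(rng, 37);
            if (v < 0 || v >= 37) throw new AssertionError("out of range: " + v);
        }
        System.out.println("all draws in range");
    }
}
```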
/**
@@ -157,11 +176,16 @@ public class FortunaRandomSource extends RandomSource implements EntropyHarveste
* through 2^numBits-1
*/
protected synchronized int nextBits(int numBits) {
int rv = 0;
long rv = 0;
int bytes = (numBits + 7) / 8;
for (int i = 0; i < bytes; i++)
rv += ((_fortuna.nextByte() & 0xFF) << i*8);
return rv;
//rv >>>= (64-numBits);
if (rv < 0)
rv = 0 - rv;
int off = 8*bytes - numBits;
rv >>>= off;
return (int)rv;
}
public EntropyHarvester harvester() { return this; }
@@ -175,4 +199,26 @@ public class FortunaRandomSource extends RandomSource implements EntropyHarveste
public synchronized void feedEntropy(String source, byte[] data, int offset, int len) {
_fortuna.addRandomBytes(data, offset, len);
}
public static void main(String args[]) {
try {
RandomSource rand = I2PAppContext.getGlobalContext().random();
if (true) {
for (int i = 0; i < 1000; i++)
if (rand.nextFloat() < 0)
throw new RuntimeException("negative!");
System.out.println("All positive");
return;
}
java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream();
java.util.zip.GZIPOutputStream gos = new java.util.zip.GZIPOutputStream(baos);
for (int i = 0; i < 1024*1024; i++) {
int c = rand.nextInt(256);
gos.write((byte)c);
}
gos.finish();
byte compressed[] = baos.toByteArray();
System.out.println("Compressed size of 1MB: " + compressed.length);
} catch (Exception e) { e.printStackTrace(); }
}
}


@@ -48,6 +48,12 @@ public class I2PThread extends Thread {
if ( (_log == null) || (_log.shouldLog(Log.DEBUG)) )
_createdBy = new Exception("Created by");
}
public I2PThread(Runnable r, String name, boolean isDaemon) {
super(r, name);
setDaemon(isDaemon);
if ( (_log == null) || (_log.shouldLog(Log.DEBUG)) )
_createdBy = new Exception("Created by");
}
private void log(int level, String msg) { log(level, msg, null); }
private void log(int level, String msg, Throwable t) {
@@ -113,4 +119,4 @@ public class I2PThread extends Thread {
} catch (Throwable tt) { // nop
}
}
}
}


@@ -91,7 +91,7 @@ public class NativeBigInteger extends BigInteger {
* initialization? this would otherwise use the Log component, but this makes
* it easier for other systems to reuse this class
*/
private static final boolean _doLog = true;
private static final boolean _doLog = System.getProperty("jbigi.dontLog") == null;
private final static String JBIGI_OPTIMIZATION_K6 = "k6";
private final static String JBIGI_OPTIMIZATION_K6_2 = "k62";


@@ -31,14 +31,14 @@ public class SimpleTimer {
_context = I2PAppContext.getGlobalContext();
_log = _context.logManager().getLog(SimpleTimer.class);
_events = new TreeMap();
_eventTimes = new HashMap(1024);
_eventTimes = new HashMap(256);
_readyEvents = new ArrayList(4);
I2PThread runner = new I2PThread(new SimpleTimerRunner());
runner.setName(name);
runner.setDaemon(true);
runner.start();
for (int i = 0; i < 3; i++) {
I2PThread executor = new I2PThread(new Executor());
I2PThread executor = new I2PThread(new Executor(_context, _log, _readyEvents));
executor.setName(name + "Executor " + i);
executor.setDaemon(true);
executor.start();
@@ -114,7 +114,7 @@ public class SimpleTimer {
long timeToAdd = System.currentTimeMillis() - now;
if (timeToAdd > 50) {
if (_log.shouldLog(Log.WARN))
_log.warn("timer contention: took " + timeToAdd + "ms to add a job");
_log.warn("timer contention: took " + timeToAdd + "ms to add a job with " + totalEvents + " queued");
}
}
@@ -141,14 +141,6 @@ public class SimpleTimer {
public void timeReached();
}
private void log(String msg, Throwable t) {
synchronized (this) {
if (_log == null)
_log = I2PAppContext.getGlobalContext().logManager().getLog(SimpleTimer.class);
}
_log.log(Log.CRIT, msg, t);
}
private long _occurredTime;
private long _occurredEventCount;
private TimedEvent _recentEvents[] = new TimedEvent[5];
@@ -228,30 +220,5 @@ public class SimpleTimer {
}
}
}
private class Executor implements Runnable {
public void run() {
while (true) {
TimedEvent evt = null;
synchronized (_readyEvents) {
if (_readyEvents.size() <= 0)
try { _readyEvents.wait(); } catch (InterruptedException ie) {}
if (_readyEvents.size() > 0)
evt = (TimedEvent)_readyEvents.remove(0);
}
if (evt != null) {
long before = _context.clock().now();
try {
evt.timeReached();
} catch (Throwable t) {
log("wtf, event borked: " + evt, t);
}
long time = _context.clock().now() - before;
if ( (time > 1000) && (_log != null) && (_log.shouldLog(Log.WARN)) )
_log.warn("wtf, event execution took " + time + ": " + evt);
}
}
}
}
}


@@ -1,4 +1,710 @@
$Id: history.txt,v 1.421 2006/02/26 16:30:58 jrandom Exp $
$Id: history.txt,v 1.550 2007-02-14 16:35:43 jrandom Exp $
* 2007-02-15 0.6.1.27 released
2007-02-15 jrandom
* Limit the whispering floodfill sends to at most 3 randomly
chosen from the known floodfill peers
2007-02-14 jrandom
* Don't filter out KICK and H(ide oper status) IRC messages
(thanks Takk and postman!)
2007-02-13 jrandom
* Tell our peers about who we know in the floodfill netDb every
6 hours or so, mitigating the situation where peers lose track
of floodfill routers.
* Disable the Syndie updater (people should use the new Syndie,
not this one)
* Disable the eepsite tunnel by default
2007-01-30 zzz
* i2psnark: Don't hold _snarks lock while checking a snark,
so web page is responsive at startup
2007-01-29 zzz
* i2psnark: Add NickyB tracker
2007-01-28 zzz
* i2psnark: Don't hold sendQueue lock while flushing output,
to make everything run smoother
2007-01-27 zzz
* i2psnark: Fix orphaned Snark reader tasks leading to OOMs
2007-01-20 Complication
* Drop overlooked comment
2007-01-20 Complication
* Modify ReseedHandler to query the "i2p.reseedURL" property from I2PAppContext
instead of System, so setting a reseed URL in advanced configuration has effect.
* Clean out obsolete reseed code from ConfigNetHandler.
2007-01-20 zzz
* i2psnark: More choking rotation tweaks
* Improve performance by not reading in the whole
piece from disk for each request. A huge memory savings
on 1MB torrents with many peers.
2007-01-17 zzz
* Add new HTTP Proxy error message for non-http protocols
2007-01-17 zzz
* Add note on Syndie index.html steering people to new Syndie
2007-01-16 zzz
* i2psnark: Fix crash when autostart off and
torrent started manually
2007-01-16 zzz
* i2psnark: Fix bug caused by last i2psnark checkin
(ConnectionAcceptor not started)
* Don't start PeerCoordinator, ConnectionAcceptor,
and TrackerClient unless starting torrent
2007-01-15 jrandom
* small guard against unnecessary streaming lib reset packets
(thanks Complication!)
2007-01-15 zzz
* i2psnark: Add 'Stop All' link on web page
* Add some links to trackers and forum on web page
* Don't start tunnel if 'Autostart' unchecked
* Fix torrent restart bug by reopening file descriptors
2007-01-14 zzz
* i2psnark: Improvements for torrents with > 4 leechers:
choke based on upload rate when seeding, and
be smarter and fairer about rotating choked peers.
* Handle two common i2psnark OOM situations rather
than shutting down the whole thing.
* Fix reporting to tracker of remaining bytes for
torrents > 4GB (but ByteMonsoon still has a bug)
2006-10-29 zzz
* i2psnark: Fix and enable generation of multifile torrents,
print error if no tracker selected at create-torrent,
fix stopping a torrent that hasn't started successfully,
add eBook and GayTorrents trackers to form,
web page formatting tweaks
* 2006-10-10 0.6.1.26 released
2006-10-29 Complication
* Ensure we get NTP samples from more diverse sources
(0.pool.ntp.org, 1.pool.ntp.org, etc)
* Discard median-based peer skew calculator as framed average works,
and adjusting its percentage can make it behave median-like
* Require more data points (from at least 20 peers)
before considering a peer skew measurement reliable
2006-10-10 jrandom
* Removed the status display from the console, as its more confusing
than informative (though the content is still displayed in the HTML)
2006-10-08 Complication
* Add a framed average peer clock skew calculator
* Add config property "router.clockOffsetSanityCheck" to determine
if NTP-suggested clock offsets get sanity checked (default "true")
* Reject NTP-suggested clock offsets if they'd increase peer clock skew
by more than 5 seconds, or make it more than 20 seconds total
* Decrease log level in getMedianPeerClockSkew()
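As a rough illustration only, the NTP offset sanity check described above might be sketched like this; how the suggested offset and the observed peer skew combine is an assumption for illustration, not taken from the router source:

```python
def offset_sane(proposed_offset_s, current_skew_s,
                max_increase_s=5.0, max_total_s=20.0):
    """Sketch: reject an NTP-suggested clock offset if applying it would
    increase peer clock skew by more than max_increase_s seconds, or push
    the total skew past max_total_s seconds. The combination rule
    (simple addition of offset and skew) is a stand-in."""
    new_skew_s = abs(current_skew_s + proposed_offset_s)
    increase_ok = new_skew_s - abs(current_skew_s) <= max_increase_s
    total_ok = new_skew_s <= max_total_s
    return increase_ok and total_ok
```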
2006-09-29 zzz
* i2psnark: Second try at synchronization fix - synch addRequest()
completely rather than just portions of it and requestNextPiece()
2006-09-27 jrandom
* added HMAC-SHA256
* properly use CRLF with EepPost
* suppress jbigi/jcpuid messages if jbigi.dontLog/jcpuid.dontLog is set
* PBE session key generation (with 1000 rounds of SHA256)
* misc SDK helper functions
2006-09-26 Complication
* Take back another inadvertent logging change in NTCPConnection
2006-09-26 Complication
* Take back an accidental log level change
2006-09-26 Complication
* Subclass from Clock a RouterClock which can access router transports,
with the goal of developing it to second-guess NTP results
* Make transports report clock skew in seconds
* Adjust renderStatusHTML() methods accordingly
* Show average for NTCP clock skews too
* Give transports a getClockSkews() method to report clock skews
* Give transport manager a getClockSkews() method to aggregate results
* Give comm system facade a getMedianPeerClockSkew() method which RouterClock calls
(to observe results, add "net.i2p.router.transport.CommSystemFacadeImpl=WARN" to logging)
* Extra explicitness in NTCP classes to denote unit of time.
* Fix some places in NTCPConnection where milliseconds and seconds were confused
2006-09-25 zzz
* i2psnark: Paranoid copy before writing pieces,
recheck files on completion, redownload bad pieces
* i2psnark: Don't contact tracker as often when seeding
2006-09-24 zzz
* i2psnark: Add some synchronization to prevent rare problem
after restoring orphan piece
2006-09-20 zzz
* i2psnark: Eliminate duplicate requests caused by i2p-bt's
rapid choke/unchokes
* i2psnark: Truncate long TrackerErr messages on web page
2006-09-16 zzz
* i2psnark: Implement retransmission of requests. This
eliminates one cause of complete stalls with a peer.
This problem is common on torrents with a small number of
active peers where there are no choke/unchokes to kickstart things.
2006-09-13 zzz
* i2psnark: Fix restoral of partial pieces broken by last patch
2006-09-13 zzz
* i2psnark: Mark a peer's requests as unrequested on disconnect,
preventing premature end game
* i2psnark: Randomize selection of next piece during end game
* i2psnark: Don't restore a partial piece to a peer that is already working on it
* i2psnark: strip ".torrent" on web page
* i2psnark: Limit piece size in generated torrent to 1MB max
2006-09-09 zzz
* i2psnark: Add "Stalled" indication and stat totals on web page
2006-09-09 zzz
* i2psnark: Fix bug where new peers would always be sent an "interested"
regardless of actual interest
* i2psnark: Reduce max piece size from 10MB to 1MB; larger may have severe
memory and efficiency problems
* 2006-09-09 0.6.1.25 released
2006-09-08 jrandom
* Tweak the PRNG logging so it only displays error messages if there are
problems
* Disable dynamic router keys for the time being, as they don't offer
meaningful security, may hurt the router, and makes it harder to
determine the network health. The code to restart on SSU IP change is
still enabled however.
* Disable tunnel load testing, leaning back on the tiered selection for
the time being.
* Spattering of bugfixes
2006-09-07 zzz
* i2psnark: Increase output timeout from 2 min to 4 min
* i2psnark: Orphan debug msg cleanup
* i2psnark: More web rate report cleanup
2006-09-05 zzz
* i2psnark: Implement basic partial-piece saves across connections
* i2psnark: Implement keep-alive sending. This will keep non-i2psnark clients
from dropping us for inactivity but also renders the 2-minute transmit-inactivity
code in i2psnark ineffective. Will have to research why there is transmit but
not receive inactivity code. With the current connection limit of 24 peers
we aren't in any danger of keeping out new peers by keeping inactive ones.
* i2psnark: Increase CHECK_PERIOD from 20 to 40 since nothing happens in 20 seconds
* i2psnark: Fix dropped chunk handling
* i2psnark: Web rate report cleanup
2006-09-04 zzz
* i2psnark: Report cleared trackerErr immediately
* i2psnark: Add trackerErr reporting after previous success; retry more quickly
* i2psnark: Set up new connections more quickly
* i2psnark: Don't delay tracker fetch when setting up lots of connections
* i2psnark: Reduce MAX_UPLOADERS from 12 to 4
2006-09-04 zzz
* Enable pipelining in i2psnark
* Make i2psnark tunnel default be 1 + 0-1
2006-09-03 zzz
* Add rate reporting to i2psnark
2006-09-03 Complication
* Limit form size in SusiDNS to avoid exceeding a POST size limit on postback
* Print messages about addressbook size to give better overview
* Enable delete function in published addressbook
2006-08-21 Complication
* Fix error reporting discrepancy (thanks for helping notice, yojoe!)
2006-08-03 jrandom
* Decrease the recently modified tunnel building timeout, though keep
the scaling on their processing
2006-07-31 jrandom
* Increase the tunnel building timeout
* Avoid a rare race (thanks bar!)
* Fix the bandwidth capacity publishing code to factor in share percentage
and outbound throttling (oops)
2006-07-29 Complication
* Treat NTP responses from unexpected stratums like failures
* 2006-07-28 0.6.1.24 released
2006-07-28 jrandom
* Don't try to reverify too many netDb entries at once (thanks
cervantes and Complication!)
2006-07-28 jrandom
* Actually fix the threading deadlock issue in the netDb (removing
the synchronized access to individual kbuckets while validating
individual entries) (thanks cervantes, postman, frosk, et al!)
* 2006-07-27 0.6.1.23 released
2006-07-27 jrandom
* Cut down NTCP connection establishments once we know the peer is skewed
(rather than wait for full establishment before verifying)
* Removed a lock on the stats framework when accessing rates, which
shouldn't be a problem, assuming rates are created (pretty much) all at
once and merely updated during the lifetime of the jvm.
2006-07-27 jrandom
* Further NTCP write status cleanup
* Handle more oddly-timed NTCP disconnections (thanks bar!)
2006-07-26 jrandom
* When dropping a netDb router reference, only accept newer
references as part of the update check
* If we have been up for a while, don't accept really old
router references (published 2 or more days ago)
* Drop router references once they are no longer valid, even if
they were allowed in due to the lax restrictions on startup
2006-07-26 jrandom
* Every time we create a new router identity, add an entry to the
new "identlog.txt" text file in the I2P install directory. For
debugging purposes, publish the count of how many identities the
router has cycled through, though not the identities themselves.
* Cleaned up the way the multitransport shitlisting worked, and
added per-transport shitlists
* When dropping a router reference locally, first fire a netDb
lookup for the entry
* Take the peer selection filters into account when organizing the
profiles (thanks Complication!)
* Avoid some obvious configuration errors for the NTCP transport
(invalid ports, "null" ip, etc)
* Deal with some small NTCP bugs found in the wild (unresolveable
hosts, strange network discons, etc)
* Send our netDb info to peers we have direct NTCP connections to
after each 6-12 hours of connection uptime
* Clean up the NTCP reading and writing queue logic to avoid some
potential delays
* Allow people to specify the IP that the SSU transport binds on
locally, via the advanced config "i2np.udp.bindInterface=1.2.3.4"
* 2006-07-18 0.6.1.22 released
2006-07-18 jrandom
* Add a failsafe to the NTCP transport to make sure we keep
pumping writes when we should.
* Properly reallow 16-32KBps routers in the default config
(thanks Complication!)
2006-07-16 Complication
* Collect tunnel build agree/reject/expire statistics
for each bandwidth tier of peers (and peers of unknown tiers,
even if those shouldn't exist)
2006-07-14 jrandom
* Improve the multitransport shitlisting (thanks Complication!)
* Allow routers with a capacity of 16-32KBps to be used in tunnels under
the default configuration (thanks for the stats Complication!)
* Properly allow older router references to load on startup
(thanks bar, Complication, et al!)
* Add a new "i2p.alwaysAllowReseed" advanced config property, though
hopefully today's changes should make this unnecessary (thanks void!)
* Improved NTCP buffering
* Close NTCP connections if we are too backlogged when writing to them
2006-07-04 jrandom
* New NIO-based tcp transport (NTCP), enabled by default for outbound
connections only. Those who configure their NAT/firewall to allow
inbound connections and specify the external host and port
(dyndns/etc is ok) on /config.jsp can receive inbound connections.
SSU is still enabled for use by default for all users as a fallback.
* Substantial bugfix to the tunnel gateway processing to transfer
messages sequentially instead of interleaved
* Renamed GNU/crypto classes to avoid name clashes with kaffe and other
GNU/Classpath based JVMs
* Adjust the Fortuna PRNG's pooling system to reduce contention on
refill with a background thread to refill the output buffer
* Add per-transport support for the shitlist
* Add a new async pumped tunnel gateway to reduce tunnel dispatcher
contention
2006-07-01 Complication
* Ensure that the I2PTunnel web interface won't update tunnel settings
for shared clients when a non-shared client is modified
(thanks for spotting, BarkerJr!)
2006-06-14 cervantes
* Small tweak to I2PTunnel CSS, so it looks better with desktops
that use Bitstream Vera fonts @ 96 dpi
* 2006-06-14 0.6.1.21 released
2006-06-13 jrandom
* Use a minimum uptime of 2 hours, not 4 (oops)
2006-06-13 jrandom
* Cut down the proactive rejections due to queue size - if we are
at the point of having decrypted the request off the queue, might
as well let it through, rather than waste that decryption
2006-06-11 Kloug
* Bugfix to the I2PTunnel IRC filter to support multiple concurrent
outstanding pings/pongs
2006-06-10 jrandom
* Further reduction in proactive rejections
2006-06-09 jrandom
* Don't let the pending tunnel request queue grow beyond reason
(letting things sit for up to 30s when they fail after 10s
seems a bit... off)
2006-06-08 jrandom
* Be more conservative in the proactive rejections
2006-06-04 Complication
* Trim out sending a blank line before USER in susimail.
Seemed to break in rare cases, thanks for reporting, Brachtus!
* 2006-06-04 0.6.1.20 released
2006-06-04 jrandom
* Reduce the SSU ack frequency
* Tweaked the tunnel rejection settings to reject less aggressively
2006-05-31 jrandom
* Only send netDb searches to the floodfill peers for the time being
* Add some proof of concept filters for tunnel participation. By default,
it will skip peers with an advertised bandwidth of less than 32KBps or
an advertised uptime of less than 2 hours. If this is sufficient, a
safer implementation of these filters will be implemented.
* 2006-05-18 0.6.1.19 released
2006-05-18 jrandom
* Made the SSU ACKs less frequent when possible
2006-05-17 Complication
* Fix some oversights in my previous changes:
adjust some loglevels, make a few statements less wasteful,
make one comparison less confusing and more likely to log unexpected values
2006-05-17 jrandom
* Make the peer page sortable
* SSU modifications to cut down on unnecessary connection failures
2006-05-16 jrandom
* Further shitlist randomizations
* Adjust the stats monitored for detecting cpu overload when dropping new
tunnel requests
2006-05-15 jrandom
* Add a load dependent throttle on the pending inbound tunnel request
backlog
* Increased the tunnel test failure slack before killing a tunnel
2006-05-13 Complication
* Separate growth factors for tunnel count and tunnel test time
* Reduce growth factors, so probabilistic throttle would activate
* Square probAccept values to decelerate stronger when far from average
* Create a bandwidth stat with approximately 15-second half life
* Make allowTunnel() check the 1-second bandwidth for overload
before doing allowance calculations using 15-second bandwidth
* Tweak the overload detector in BuildExecutor to be more sensitive
for rising edges, add ability to initiate tunnel drops
* Add a function to seek and drop the highest-rate participating tunnel,
keeping a fixed+random grace period between such drops.
It doesn't seem very effective, so disabled by default
("router.dropTunnelsOnOverload=true" to enable)
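The squared probAccept idea above can be sketched as follows; the exact load ratio the router's throttle uses is an assumption here, the point is only that squaring makes acceptance fall off faster the further load is from average:

```python
def prob_accept(current_load, average_load):
    """Sketch of a squared acceptance probability: 1.0 at or below
    average load, shrinking quadratically as load rises above it.
    The ratio chosen is illustrative, not the router's actual formula."""
    if average_load <= 0 or current_load <= average_load:
        return 1.0  # at or below average: always accept
    ratio = average_load / current_load  # in (0, 1) when overloaded
    return ratio * ratio  # squared: decelerate harder far from average
```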
2006-05-11 jrandom
* PRNG bugfix (thanks cervantes and Complication!)
* 2006-05-09 0.6.1.18 released
2006-05-09 jrandom
* Further tunnel creation timeout revamp
2006-05-07 Complication
* Fix problem whereby repeated calls to allowed() would make
the 1-tunnel exception permit more than one concurrent build
2006-05-06 jrandom
* Readjust the tunnel creation timeouts to reject less but fail earlier,
while tracking the extended timeout events.
2006-05-04 jrandom
* Short circuit a highly congested part of the stat logging unless it's
required (may or may not help with a synchronization issue reported by
andreas)
2006-05-03 Complication
* Allow a single build attempt to proceed despite 1-minute overload
only if the 1-second rate shows enough spare bandwidth
(e.g. overload has already eased)
2006-05-02 Complication
* Correct a misnamed property in SummaryHelper.java
to avoid confusion
* Make the maximum allowance of our own concurrent
tunnel builds slightly adaptive: one concurrent build per 6 KB/s
within the fixed range 2..10
* While overloaded, try to avoid completely choking our own build attempts,
instead prefer limiting them to 1
2006-05-01 jrandom
* Adjust the tunnel build timeouts to cut down on expirations, and
increased the SSU connection establishment retransmission rate to
something less glacial.
* For the first 5 minutes of uptime, be less aggressive with tunnel
exploration, opting for more reliable peers to start with.
2006-05-01 jrandom
* Fix for a netDb lookup race (thanks cervantes!)
2006-04-27 jrandom
* Avoid a race in the message reply registry (thanks cervantes!)
2006-04-27 jrandom
* Fixed the tunnel expiration desync code (thanks Complication!)
* 2006-04-23 0.6.1.17 released
2006-04-19 jrandom
* Adjust how we pick high capacity peers to allow the inclusion of fast
peers (the previous filter assumed an old usage pattern)
* New set of stats to help track per-packet-type bandwidth usage better
* Cut out the proactive tail drop from the SSU transport, for now
* Reduce the frequency of tunnel build attempts while we're saturated
* Don't drop tunnel requests as easily - prefer to explicitly reject them
* 2006-04-15 0.6.1.16 released
2006-04-15 jrandom
* Adjust the proactive tunnel request dropping so we will reject what we
can instead of dropping so much (but still dropping if we get too far
overloaded)
2006-04-14 jrandom
* 0 isn't very random
* Adjust the tunnel drop to be more reasonable
2006-04-14 jrandom
* -28.00230115311259 is not between 0 and 1 in any universe I know.
* Made the bw-related tunnel join throttle much simpler
2006-04-14 jrandom
* Make some more stats graphable, and allow some internal tweaking on the
tunnel pairing for creation and testing.
* 2006-04-13 0.6.1.15 released
2006-04-12 jrandom
* Added a further failsafe against trying to queue up too many messages to
a peer.
2006-04-12 jrandom
* Watch out for failed syndie index fetches (thanks bar!)
2006-04-11 jrandom
* Throttling improvements on SSU - throttle all transmissions to a peer
when we are retransmitting, not just retransmissions. Also, if
we're already retransmitting to a peer, probabilistically tail drop new
messages targeting that peer, based on the estimated wait time before
transmission.
* Fixed the rounding error in the inbound tunnel drop probability.
2006-04-10 jrandom
* Include a combined send/receive graph (good idea cervantes!)
* Proactively drop inbound tunnel requests probabilistically as the
estimated queue time approaches our limit, rather than letting them all
through up to that limit.
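The probabilistic drop described above can be sketched as below; the linear ramp is illustrative (the router's actual curve, and the rounding fix mentioned in the 2006-04-11 entry, are not taken from the source):

```python
import random

def should_drop(estimated_queue_ms, limit_ms):
    """Sketch: instead of accepting every request until the estimated
    queue time hits the limit, drop with a probability that ramps up
    as the estimate approaches the limit."""
    if estimated_queue_ms >= limit_ms:
        return True  # at or past the limit: always drop
    p_drop = estimated_queue_ms / limit_ms  # ramps from 0 toward 1
    return random.random() < p_drop
```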
2006-04-08 jrandom
* Stat summarization fix (removing the occasional holes in the jrobin
graphs)
2006-04-08 jrandom
* Process inbound tunnel requests more efficiently
* Proactively drop inbound tunnel requests if the queue before we'd
process it in is too long (dynamically adjusted by cpu load)
* Adjust the tunnel rejection throttle to reject requests when we have to
proactively drop too many requests.
* Display the number of pending inbound tunnel join requests on the router
console (as the "handle backlog")
* Include a few more stats in the default set of graphs
2006-04-06 jrandom
* Fix for a bug in the new irc ping/pong filter (thanks Complication!)
2006-04-06 jrandom
* Fixed a typo in the reply cleanup code
* 2006-04-05 0.6.1.14 released
2006-04-05 jrandom
* Cut down on the time that we allow a tunnel creation request to sit by
without response, and reject tunnel creation requests that are lagged
locally. Also switch to a bounded FIFO instead of a LIFO
* Threading tweaks for the message handling (thanks bar!)
* Don't add addresses to syndie with blank names (thanks Complication!)
* Further ban clearance
2006-04-05 jrandom
* Fix during the SSU handshake to avoid an unnecessary failure on
packet retransmission (thanks ripple!)
* Fix during the SSU handshake to use the negotiated session key asap,
rather than using the intro key for more than we should (thanks ripple!)
* Fixes to the message reply registry (thanks Complication!)
* More comprehensive syndie banning (for repeated pushes)
* Publish the router's ballpark bandwidth limit (w/in a power of 2), for
testing purposes
* Put a floor back on the capacity threshold, so too many failing peers
won't cause us to pick very bad peers (unless we have very few good
ones)
* Bugfix to cut down on peers using introducers unnecessarily (thanks
Complication!)
* Reduced the default streaming lib message size to fit into a single
tunnel message, rather than require 5 tunnel messages to be transferred
without loss before recomposition. This reduces throughput, but should
increase reliability, at least for the time being.
* Misc small bugfixes in the router (thanks all!)
* More tweaking for Syndie's CSS (thanks Doubtful Salmon!)
2006-04-01 jrandom
* Take out the router watchdog's teeth (don't restart on leaseset failure)
* Filter the IRC ping/pong messages, as some clients send unsafe
information in them (thanks aardvax and dust!)
2006-03-30 jrandom
* Substantially reduced the lock contention in the message registry (a
major hotspot that can choke most threads). Also reworked the locking
so we don't need per-message timer events
* No need to have additional per-peer message clearing, as they are
either unregistered individually or expired.
* Include some of the more transient tunnel throttling
* 2006-03-26 0.6.1.13 released
2006-03-25 jrandom
* Added a simple purge and ban of syndie authors, shown as the
"Purge and ban" button on the addressbook for authors that are already
on the ignore list. All of their entries and metadata are deleted from
the archive, and they are transparently filtered from any remote
syndication (so no user on the syndie instance will pull any new posts
from them)
* More strict tunnel join throttling when congested
2006-03-24 jrandom
* Try to desync tunnel building near startup (thanks Complication!)
* If we are highly congested, fall back on only querying the floodfill
netDb peers, and only storing to those peers too
* Cleaned up the floodfill-only queries
2006-03-21 jrandom
* Avoid a very strange (unconfirmed) bug that people using the systray's
browser picker dialog could cause by disabling the GUI-based browser
picker.
* Cut down on subsequent streaming lib reset packets transmitted
* Use a larger MTU more often
* Allow netDb searches to query shitlisted peers, as the queries are
indirect.
* Add an option to disable non-floodfill netDb searches (non-floodfill
searches are used by default, but can be disabled by adding
netDb.floodfillOnly=true to the advanced config)
2006-03-20 jrandom
* Fix to allow for some slack when coalescing stats
* Workaround some oddball errors
2006-03-18 jrandom
* Added a new graphs.jsp page to show all of the stats being harvested
2006-03-18 jrandom
* Made the netDb search load limitations a little less stringent
* Add support for specifying the number of periods to be plotted on the
graphs - e.g. to plot only the last hour of a stat that is averaged at
the 60 second period, add &periodCount=60
2006-03-17 jrandom
* Add support for graphing the event count as well as the average stat
value (done by adding &showEvents=true to the URL). Also supports
hiding the legend (&hideLegend=true), the grid (&hideGrid=true), and
the title (&hideTitle=true).
* Removed an unnecessary arbitrary filter on the profile organizer so we
can pick high capacity and fast peers more appropriately
2006-03-16 jrandom
* Integrate basic hooks for jrobin (http://jrobin.org) into the router
console. Selected stats can be harvested automatically and fed into
in-memory RRD databases, and those databases can be served up either as
PNG images or as RRDtool compatible XML dumps (see oldstats.jsp for
details). A base set of stats are harvested by default, but an
alternate list can be specified by setting the 'stat.summaries' list on
the advanced config. For instance:
stat.summaries=bw.recvRate.60000,bw.sendRate.60000
* HTML tweaking for the general config page (thanks void!)
* Odd NPE fix (thanks Complication!)
2006-03-15 Complication
* Trim out an old, inactive IP second-guessing method
(thanks for spotting, Anonymous!)
2006-03-15 jrandom
* Further stat cleanup
* Keep track of how many peers we are actively trying to communicate with,
beyond those who are just trying to communicate with us.
* Further router tunnel participation throttle revisions to avoid spurious
rejections
* Rate stat display cleanup (thanks ripple!)
* Don't even try to send messages that have been queued too long
2006-03-05 zzz
* Remove the +++--- from the logs on i2psnark startup
2006-03-05 jrandom
* HTML fixes in Syndie to work better with opera (thanks shaklen!)
* Give netDb lookups to floodfill peers more time, as they are much more
likely to succeed (thereby cutting down on the unnecessary netDb
searches outside the floodfill set)
* Fix to the SSU IP detection code so we won't use introducers when we
don't need them (thanks Complication!)
* Add a brief shitlist to i2psnark so it doesn't keep on trying to reach
peers given to it
* Don't let netDb searches wander across too many peers
* Don't use the 1s bandwidth usage in the tunnel participation throttle,
as it's too volatile to have much meaning.
* Don't bork if a Syndie post is missing an entry.sml
2006-03-05 Complication
* Reduce exposed statistical information,
to make build and uptime tracking more expensive
2006-03-04 Complication
* Fix the announce URL of orion's tracker in Snark sources
2006-03-03 Complication
* Explicit check for an index out of bounds exception while parsing
an inbound IRC command (implicit check was there already)
2006-03-01 jrandom
* More aggressive tunnel throttling as we approach our bandwidth limit,
and throttle based off periods wider than 1 second.
* Included Doubtful Salmon's syndie stylings (thanks!)
2006-02-27 zzz
* Update error page templates to add \r, Connection: close, and
Proxy-connection: close to headers.
* 2006-02-27 0.6.1.12 released


@@ -1,6 +1,9 @@
; TC's hosts.txt guaranteed freshness
-; $Id: hosts.txt,v 1.166 2006/01/23 10:29:36 cervantes Exp $
+; $Id: hosts.txt,v 1.169 2006-12-16 17:31:07 jrandom Exp $
; changelog:
+; (1.191) added trac.i2p
+; (1.190) added archive.syndie.i2p
+; (1.189) added mtn.i2p
; (1.188) added downloads.legion.i2p, politguy.i2p, ninja.i2p
; (1.187) added hidden.i2p, bk1k.i2p, antipiracyagency.i2p
; (1.186) added decadence.i2p, freedomarchives.i2p, closedshop.i2p
@@ -498,4 +501,7 @@ antipiracyagency.i2p=lnOKgQBEcsbZHgsuN5rJQA5mXK59fWPoCtag-EGfgYkbO1YbbPAqUZHqF4a
downloads.legion.i2p=p0eQqZscgqFvpoQtftNXFVRJpNzMkW1gx0cEmVz5xcpT195DxoVEwGlQ34mP4Da5nnUggcCaHzW9JBduqxiZU73quiO6VYeE65b70EhS0mRgsoVaU9-nsqo7ikYZ0-Rr6Qrhn32M6vktf4b2KljmpHgaBnJzMsFiMaEj3QuxGY8Q4tH6P34tgKiv2hYzZ8DGCj9bcmzzW0LX8XwA9tufi4XGM31qAZu~CiW14J5I8AwDUQRnyaWiZ8OzN~o3qTbkPyMAfXnAewcoA~GJPF9oAb-lHdESGeSEE38Krk2OYv3gQNUTiZcVaEck3VktqFmQiHCWDtAO0z8DIv9qmSjCI41w0MTCVlXNPRDO-YCE9JHlZv5zvS1~uCKltJwKGAHxHv~8N4oKMjJiB52XHXo3sWk503NTF-OK29ng~T4qfDQgEL6mm~jFhF3X-aruSkbn3OcRhWEgSJ2YdQYERnSwZn8gg4k78kvx02reisAHBN04UKa9YW95~Tz4S5Mqn8lrAAAA
politguy.i2p=nyQ37u3uP7Zm5YoFXL4MFEjJH6iQhrMs0K~Bk57fEIYRGJIG31~tqScmDLOUPjxiwzHEsbbLJo-o9rll7744HMOHmlfNGegKMO-n7~pWCVRW2XtMQL17NnZl97bPiDdymTVJoAKrs2ZfEayDYiiBxKbUlbcnT51O0~aTymSmlyh1i0VQajbQOveEdN-QDiOWEdBLZCQx7lg4BXT78Sb76qQVj4c6jdYRxP~q1nL1XHyjp~qd0jjLFNeTqS8YPdEDJFWAKo7M1bIYV4SUcj40GzGIuWRnaSY-utizQFrRUC0NcdwZK7Mo1JivGC0TFtWFJsQ-xx5ix4vP4vYeQ8rGiyED32s1kwII9rCBu9DmLLTwGBjTX0yjXI4XEH2Q-C0fgIw6YniDYQj50a3q1KTkaqiXm~EtnWqkv9dGvSYY8-mfBlRKD8k1X83-MhycdZHNrsWhGxA~zpBmKcMVHD0CDbW2ZF6sHyS3wPZiQ2haeBJL6mm7ZWQT6J875w3SP-nnAAAA
ninja.i2p=XhyN8MgL6DwWE98g2DJlDLHy~02g58IF14LTb472cmSLRj~cHv7wyi1mxy21vtf12oPWRquwMqpn4vJbKtnoQrLqRBgKAWtHgvUWOKPA8HqOmP4g9di3vGLefUNHUjkZWPXANBs-HwWH~~zPV0zwjN5keEZLVNQKfUkc0-qJ77zfxrDAPVjUBlyx1nlH7HcSxgzXS8qNlQuxWEqlqOiPushnalVFZSul-kVzG0Zo1PIQQHDKA1kpmwr2EX5xvCq0PgDZgj~xc3xkTzBMXC-m~NnmB1NRrH80lpY3nWMo-ySTVS7KcHH--zPUkvQhE2PCjLcSQsqpK2xsq0XEQOeAqoeDhosc3XxQDnDTmgBC0Bnus4N6RMBZgBRpniFT4ofwgGmFS1nDGkI-F4fLJSQ4f~P4sUR6HCZnwWIEvcr7lJgr26RP584nev9Dzhr4dq1hfTesShfluB19KJ17eBsimy1xIGDbiAeCSVtUjEviyb1lm7C~we3c0oCUUt2NvOGWAAAA
mtn.i2p=RqXC4xbFK6t3g2wk9SO4RjY7mj1c0DmtMra5c1Md8t-DcNPSjQFmqT97pcZ5IR1JDKqyCO7RI~aATTTMPQexoEeqK9-6Poeh2RA1C81FzcA9sHvjLeg3eB1Cju-sE-IDeFntEvCC4w7wWnpmLzCfdXK7OjSK1wYc6OkqPOLVDEy-N-4UUPlZFWWUghpjBGXGayXz6JRKtoMIwhFQaiKdRvAs32ozM9RM0NWzrCaRLZBIQ6Gg1Ys1wF0-oBJgC4T4CN6SxJNaz9Dfw4GNtPyD6lq3S1osY1ccflm3itvUt3JC1J9ypoXzylBE5MuS-LTgbgbMdMFty07AoWB~EY8TwW8EQQO07GSzB7hm53u0iCEH44GexhKHtQP-hYbIr3mklo89BvfWIRGMTwUkAzYojzC-vOyIh16LWrQQhGbLvByKQSdWk9nInm3GEfqRVtSpuqd4m6iHzrBDZ3fvKjbuywot3hDNHitOHOedmBNA8neCzLkod8b0Z4xx~qRIEObxAAAA
archive.syndie.i2p=iXX0DadZTJQpPr1to0OmQ4xokHgx1HYd5ec7~zIjQ80W~p4kRCYJmEzibH2Kn59Gi04SAXeA2O9i3bNqfGCQjsbz7UcjPGrW6-UrckXVXW67Moxp7QWY6i-aKuVYM3bqYxUL2mWvcDzJ8D0ecMpvasxhxwXpdFn2J6CGboMxeGV8R3hwwlNYbYoKgHr74qEJaIZpm1FrRWvNHV5cMv363iWnPy72XspQefk79-VOjPsxfummosU7gqlxl5teyiGKNzMs3G6iJyfVHO8IlKtdn~P~ET9p7zWlTPgV8NTyCVB-Wn5S3JMkMGOFZR7wSlxSwGFpTFQKc7mxVTtLZ5nWcV2OhvOIxRZ31RvGJZyVs562RC5aMfyqcM5IHQiZVlmkhzJKIy9VDw8tKayQtRM-xeN5k6Qr7iMmYIRORwuAODkYApoMD9a0eJ6ZYOSgBMOCSvYcwfT8axRY~GabiHm0QC82mo-nDgrUypGKtOPMI9MIqMTsb8Yl-UGWn6twBAIzAAAA
trac.i2p=OBnF9NtkEsPij2F-lp3bWDVrJsPQQPdq6adlpq0N4BY1XRjtDBZl~EpDdk7roq49~ptKAQG2cNUeBEKIIrdlZhJio5pMwUl6YinizzkNTFfZipB5OKoB7PBulxkw-N9mKMhS1btd9ajcV8tiP3xiv7VSlgiDwbdKg1fmkvNrVrJnzkN3-ey2kebYnbh7jjU2gPFUl~CwSEkIi6AK9EfqmFR-DUVohyygqAY~fi4EMeTVXGUqftXSNFYUwpRJgFrWRPTurtZnJK5403q67oEk0eWrPIZ8ytJWSBfffAXL3ts~0O1FZeKXUccsAl33j70~lklSolNVLJ40y-6X5ZLWajmX0ONU3j0qI5A~7fgNgsg-vKypPDuzl8ug-D~BmhqdAf0sRYmziDVwTgU~WRB6IzhhXFR6CbwrGXdgOGg2qNT1eOnMwGo3SMMJ7kK88VC5LdYg2dyiyjZATuvT92QdZglrVQIeBqAehcFjOBuycC1ED3AOak8D9Xplj7V6hN-HAAAA


@@ -1,5 +1,5 @@
-<i2p.news date="$Date: 2006/02/21 10:20:21 $">
-<i2p.release version="0.6.1.12" date="2006/02/27" minVersion="0.6"
+<i2p.news date="$Date: 2006-10-09 00:10:22 $">
+<i2p.release version="0.6.1.27" date="2007/02/15" minVersion="0.6"
anonurl="http://i2p/NF2RLVUxVulR3IqK0sGJR0dHQcGXAzwa6rEO4WAWYXOHw-DoZhKnlbf1nzHXwMEJoex5nFTyiNMqxJMWlY54cvU~UenZdkyQQeUSBZXyuSweflUXFqKN-y8xIoK2w9Ylq1k8IcrAFDsITyOzjUKoOPfVq34rKNDo7fYyis4kT5bAHy~2N1EVMs34pi2RFabATIOBk38Qhab57Umpa6yEoE~rbyR~suDRvD7gjBvBiIKFqhFueXsR2uSrPB-yzwAGofTXuklofK3DdKspciclTVzqbDjsk5UXfu2nTrC1agkhLyqlOfjhyqC~t1IXm-Vs2o7911k7KKLGjB4lmH508YJ7G9fLAUyjuB-wwwhejoWqvg7oWvqo4oIok8LG6ECR71C3dzCvIjY2QcrhoaazA9G4zcGMm6NKND-H4XY6tUWhpB~5GefB3YczOqMbHq4wi0O9MzBFrOJEOs3X4hwboKWANf7DT5PZKJZ5KorQPsYRSq0E3wSOsFCSsdVCKUGsAAAA/i2p/i2pupdate.sud"
publicurl="http://dev.i2p.net/i2p/i2pupdate.sud"
anonannouncement="http://i2p/NF2RLVUxVulR3IqK0sGJR0dHQcGXAzwa6rEO4WAWYXOHw-DoZhKnlbf1nzHXwMEJoex5nFTyiNMqxJMWlY54cvU~UenZdkyQQeUSBZXyuSweflUXFqKN-y8xIoK2w9Ylq1k8IcrAFDsITyOzjUKoOPfVq34rKNDo7fYyis4kT5bAHy~2N1EVMs34pi2RFabATIOBk38Qhab57Umpa6yEoE~rbyR~suDRvD7gjBvBiIKFqhFueXsR2uSrPB-yzwAGofTXuklofK3DdKspciclTVzqbDjsk5UXfu2nTrC1agkhLyqlOfjhyqC~t1IXm-Vs2o7911k7KKLGjB4lmH508YJ7G9fLAUyjuB-wwwhejoWqvg7oWvqo4oIok8LG6ECR71C3dzCvIjY2QcrhoaazA9G4zcGMm6NKND-H4XY6tUWhpB~5GefB3YczOqMbHq4wi0O9MzBFrOJEOs3X4hwboKWANf7DT5PZKJZ5KorQPsYRSq0E3wSOsFCSsdVCKUGsAAAA/pipermail/i2p/2005-September/000878.html"


@@ -4,7 +4,7 @@
<info>
<appname>i2p</appname>
-<appversion>0.6.1.12</appversion>
+<appversion>0.6.1.27</appversion>
<authors>
<author name="I2P" email="support@i2p.net"/>
</authors>


@@ -1,7 +1,9 @@
-HTTP/1.1 409 Conflict
-Content-Type: text/html; charset=iso-8859-1
-Cache-control: no-cache
+HTTP/1.1 409 Conflict
+Content-Type: text/html; charset=iso-8859-1
+Cache-control: no-cache
+Connection: close
+Proxy-Connection: close
<html><head>
<title>Destination key conflict</title>
<style type='text/css'>


@@ -1,7 +1,9 @@
-HTTP/1.1 504 Gateway Timeout
-Content-Type: text/html; charset=iso-8859-1
-Cache-control: no-cache
+HTTP/1.1 504 Gateway Timeout
+Content-Type: text/html; charset=iso-8859-1
+Cache-control: no-cache
+Connection: close
+Proxy-Connection: close
<html><head>
<title>Eepsite not reachable</title>
<style type='text/css'>


@@ -1,7 +1,9 @@
-HTTP/1.1 400 Destination Not Found
-Content-Type: text/html; charset=iso-8859-1
-Cache-control: no-cache
+HTTP/1.1 400 Destination Not Found
+Content-Type: text/html; charset=iso-8859-1
+Cache-control: no-cache
+Connection: close
+Proxy-Connection: close
<html><head>
<title>Invalid eepsite destination</title>
<style type='text/css'>


@@ -1,7 +1,9 @@
-HTTP/1.1 404 Domain Not Found
-Content-Type: text/html; charset=iso-8859-1
-Cache-control: no-cache
+HTTP/1.1 404 Domain Not Found
+Content-Type: text/html; charset=iso-8859-1
+Cache-control: no-cache
+Connection: close
+Proxy-Connection: close
<html><head>
<title>Eepsite unknown</title>
<style type='text/css'>


@@ -1,7 +1,9 @@
-HTTP/1.1 504 Gateway Timeout
-Content-Type: text/html; charset=iso-8859-1
-Cache-control: no-cache
+HTTP/1.1 504 Gateway Timeout
+Content-Type: text/html; charset=iso-8859-1
+Cache-control: no-cache
+Connection: close
+Proxy-Connection: close
<html><head>
<title>Outproxy Not Found</title>
<style type='text/css'>


@@ -9,13 +9,6 @@
and they'll be reachable. The 'key' to your eepsite that you need to
give to other people is shown on the eepsite's I2PTunnel
<a href="http://localhost:7657/i2ptunnel/edit.jsp?tunnel=3">configuration page</a>). </p>
<p>
If you have any standard java web applications (.war files) such as
<a href="http://wiki.blojsom.com/wiki/display/blojsom/About%2Bblojsom">blojsom</a>
or <a href="http://snipsnap.org/space/start">SnipSnap</a>, simply drop their .war
file into ./eepsite/webapps/ and they'll be reachable at
http://$yourEepsite/warFileName/</p>
<p>You can also reach your eepsite locally through
<a href="http://localhost:7658/">http://localhost:7658/</a>. If you
want to change the port number, edit the file ./eepsite/jetty.xml and
