Compare commits

1310 Commits

Author SHA1 Message Date
bf51741134 * Update versions, package release 2008-08-24 10:28:57 +00:00
zzz
33e8abfc3e * Persistent data store: Increase write limit from 300 to 600
so floodfill routers don't get backed up
2008-08-20 15:02:56 +00:00
zzz
258d01f0d9 * Blocklists: Handle blank lines and \r\n in blocklist.txt
* NTCP: Add connection limit, set by i2np.ntcp.maxConnections,
      default is 500 (very high for now)
2008-08-20 14:58:45 +00:00
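For illustration, the connection limit above is controlled by a single advanced-configuration property, shown here with its stated default:

    # cap NTCP connections (default 500, "very high for now")
    i2np.ntcp.maxConnections=500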
zzz
49af13a3ca * i2psnark: Fix OOM vulnerability by checking incoming message length
(thanks devzero!)
2008-08-13 15:59:16 +00:00
zzz
719ba3f66f * Floodfill Peer Selector:
- Avoid peers whose netdb is old, or have a recent failed store,
        or are forever-shitlisted
2008-08-04 19:31:11 +00:00
zzz
9652db9623 * Blocklists:
- New, disabled by default, except for blocking of
        forever-shitlisted peers. See source for instructions
        and file format.
    * Transport - Reject peers from inbound connections:
      - Check IP against blocklist
      - Check router hash against forever-shitlist, then block IP
2008-07-30 03:59:18 +00:00
zzz
481af00bab -9 2008-07-16 15:05:56 +00:00
zzz
11d267bc9a * configpeer.jsp: New 2008-07-16 15:05:07 +00:00
zzz
40f0cb65a1 * SSU:
Don't proactively reconnect until 30m idle, so
        we don't lose introducer tags prematurely
2008-07-16 15:04:02 +00:00
zzz
2ba9929277 * PRNG: Move logging from wrapper to router log 2008-07-16 15:03:00 +00:00
zzz
616abba328 * i2psnark: Open completed files read-only the first time 2008-07-16 15:02:20 +00:00
14a6352d9a Corrected UTF-8 encoding 2008-07-16 14:44:17 +00:00
5782c42d25 Cleaned up all 'imports' in all applications, core and router. 2008-07-16 13:42:54 +00:00
dev
f261deaf16 made code more 1.5 compatible 2008-07-14 15:05:26 +00:00
zzz
5228543236 * SSU:
- Try to pick better introducers by checking shitlist,
        wasUnreachable list, failing list, and idle times
      - To keep introducer connections up and valid,
        periodically send a "ping" (a data packet with no data and no acks)
        to everybody that has been an introducer in the last two hours
      - Add a stat udp.receiveRelayRequestBadTag, make udp.receiveRelayRequest only for good ones
      - Remove some 60s and 5m stats, leave only the 10m ones
      - Narrow the range for the retransmit time after an allocation fail
      - Adjust some logging
2008-07-07 14:18:38 +00:00
zzz
e173a47e01 * Streaming lib - adjust some loggging, cleanup Connection.toString() 2008-07-07 14:18:15 +00:00
zzz
07b895a069 * Router console: Flag placeholder pages as noncacheable 2008-07-07 14:10:10 +00:00
zzz
4d8ffc85e2 * LoadTestManager: Don't instantiate, it's disabled 2008-07-07 14:09:16 +00:00
zzz
53e2e0d1c9 * KeyManager:
- Don't write router key backup when leaseSet keys are updated
      - Synchronize to prevent concurrent writes (thanks Galen!)
      - Backup keys every 7 days instead of every 5 minutes
2008-07-07 14:07:59 +00:00
zzz
e0dcf82697 * HTTP Proxy: Don't show jump links for unknown jump hosts 2008-07-07 14:07:14 +00:00
zzz
0cfac58adb * i2psnark:
- Repair corrupted files with wrong length rather than die
      - Register shutdown hook to properly shutdown torrents when
        the router shuts down, hopefully will reduce corruption
      - Add Galen tracker
      - Add a note about how to change directory
2008-07-07 14:05:54 +00:00
zzz
2768bef991 * NTCP:
- Try to fix 100% CPU, caused perhaps by JVM NIO bug...
      - Fix failsafe stats
2008-06-30 03:14:32 +00:00
zzz
bae712ad96 * i2psnark:
- Fix NPE caused by race (thanks echelon!)
      - Add mastertracker, remove de-ebook
2008-06-30 03:09:20 +00:00
zzz
49cb4c13b3 * PersistentDataStore: More leaseSet code cleanup 2008-06-30 03:08:16 +00:00
zzz
28da17276c * configstats.jsp: Fix NPE when no stats checked (thanks nothome27!) 2008-06-30 03:07:52 +00:00
zzz
14099ace69 * SimpleTimer: Change congestion message from error to warn 2008-06-30 03:07:00 +00:00
zzz
9289799c97 * FloodfillMonitorJob: Change range from 5-7 to 4-6 2008-06-24 14:41:41 +00:00
zzz
f057666ac2 * PersistentDataStore: Don't try to remove nonexistent leaseSet files 2008-06-24 14:39:14 +00:00
zzz
a11b74b2d8 * NTCP: Remove getIsInbound(), duplicate of isInbound() 2008-06-24 14:38:09 +00:00
zzz
0e018c5b4d * Router console: add placeholder pages for i2psnark, i2ptunnel,
susidns, and susimail for use when the .wars are not running
2008-06-24 14:36:39 +00:00
zzz
107a90fa33 increase max window size to 128 2008-06-24 14:33:30 +00:00
dev
01259cc07d merge of '5c7631359fea237f6aa916acd4f76a8a00d519fb'
and '88d299913d4aeb0ef7e8adabb5c39255e9cca0d2'
2008-06-22 14:08:06 +00:00
dev
4d955f3be5 minor optimization in I2PDatagramDissector (only verify signature once) 2008-06-22 14:07:30 +00:00
zzz
49e429c166 * PRNG: Add two stats
* Summary bar:
      - Display Warning for TCP private IP address
      - Display PRNG stats
2008-06-20 20:22:38 +00:00
zzz
53c5b1446a * OutNetMessage: Change cache logging from WARN to INFO 2008-06-20 20:21:21 +00:00
zzz
dc68ebbaeb * configclients.jsp: Add start button for clients and webapps. 2008-06-20 20:20:50 +00:00
zzz
f3d73a6c15 * configclients.jsp: Implement saves for clients and webapps. 2008-06-17 13:48:41 +00:00
zzz
91950a37f5 * Comm System: Add new STATUS_HOSED for use when UDP bind fails
* Summary bar: Display helpful error message when UDP bind fails
    * UDP: Don't bid when UDP bind fails
2008-06-17 13:47:54 +00:00
zzz
a8c266402e * configclients.jsp: New. For both clients and webapps.
Saves are not yet implemented.
2008-06-16 12:31:14 +00:00
zzz
d78fb4df3c * RouterConsoleRunner: Use a new config file, webapps.config,
to control which .wars in webapps/ get run. Apps are enabled
      by default; disable by (e.g.) webapps.syndie.startOnLoad=false
      Config file is written if it does not exist.
      Implement methods for use by upcoming configclients.jsp.
2008-06-16 12:26:36 +00:00
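A minimal sketch of a webapps.config entry as described above, using the example key from the commit message (the full key set may differ):

    # webapps are enabled by default; disable one like this
    webapps.syndie.startOnLoad=false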
zzz
fb5a8ee0d8 * Refactor LoadClientAppsJob.java, move some functions to new
ClientAppConfig.java, to make them easily available to
      a future configclients.jsp
2008-06-16 12:19:55 +00:00
zzz
7b81062816 * UDP: Prevent 100% CPU when UDP bind fails;
change bind fail message from ERROR to CRIT
2008-06-16 12:18:43 +00:00
zzz
c3a2adc97e minor updates 2008-06-13 17:19:42 +00:00
zzz
bff685f7ca * Throttle: Use BANDWIDTH rather than CRIT as the rejection reason at
startup, so peers don't list us as failing.
2008-06-10 14:05:55 +00:00
zzz
7e51c86c38 * Floodfill: Add new FloodfillMonitorJob, which tracks active
floodfills, and automatically enables/disables floodfill on
      Class O routers to maintain 5-7 total active floodfills
2008-06-10 13:37:27 +00:00
zzz
df069ec9d1 * NetDb Stats:
- Remove several more stats
      - Don't publish bw stats in first hour of uptime
      - Publish floodfill stats even if other stats are disabled
      - Changes not effective until 0.6.2.1 to provide cover.
2008-06-10 13:28:13 +00:00
zzz
eb3164d0e0 * graphs.jsp: Fix a bug where it tries to display the combined
bandwidth graph when it isn't available
2008-06-10 13:17:28 +00:00
zzz
5a69de3650 0.6.2-1 2008-06-09 14:12:09 +00:00
zzz
87b933fd3a propagate from branch 'i2p.i2p.i2p-0.6.2.1-pre' (head 8b23a248995e5c57ccef1c2620a47929f4b257cf)
to branch 'i2p.i2p' (head f65d1f225d8700ea812e1c3cbc0ee9e7a5bbaf98)
2008-06-09 14:00:50 +00:00
zzz
2404078bfa 2008-06-09 zzz
* Reachability: Restrict peers with no SSU address at all from inbound tunnels
    * News:
      - Add display of last updated and last checked time
        on index.jsp and configupdate.jsp
      - Add a function to get update version (unused for now)
    * config.jsp: Add another warning
2008-06-09 13:14:52 +00:00
zzz
6b33378a0a fix date in the xml 2008-06-08 17:37:13 +00:00
c46c9b2b7c * Fix version in news.xml so it could be published 2008-06-07 21:46:33 +00:00
acf22bf8fc * Write announcement and prepare for release 2008-06-07 20:34:07 +00:00
zzz
f3b8c73e96 * NetDb: Tweak some logging on lease problems
* Shitlist:
      - Add shitlistForever() and isShitlistedForever(), unused for now
      - Sort the HTML output by router hash
    * config.jsp: Add another warning
    * netdb.jsp:
      - Sort the lease HTML output by dest hash, local first
      - Sort the router HTML output by router hash
2008-06-07 17:44:13 +00:00
zzz
a8e625072b add de-ebook-archiv.i2p 2008-06-06 21:36:34 +00:00
zzz
88e26224c2 * LeaseSet:
- Sort the leases by expiration date in TunnelPool.locked_buildNewLeaseSet()
        to make later LeaseSet comparisons reliable. This cleans up the code too.
      - Fix broken old vs. new LeaseSet comparison
        in ClientConnectionRunner.requestLeaseSet(),
        so that we only sign and publish a new LeaseSet when it's really new.
        Should reduce outbound overhead both in LeaseSet publishing and LeaseSet bundling,
        and floodfill router load, since locked_buildNewLeaseSet() generates
        the same LeaseSet as before quite frequently, often just seconds apart.
2008-06-06 16:17:07 +00:00
zzz
db9db18bdf * LeaseSet - code cleanup:
- Add exception to enforce max # of leases = 6, should be plenty
      - Rewrite TunnelPool.locked_buildNewLeaseSet() so it doesn't add lots of
        leases and then immediately remove them again, triggering
        the new leaseSet size exception
      - Remove the now unused LeaseSet.removeLease(lease) and
        LeaseSet.removeLease(index)
      - Store first and last expiration for efficiency
2008-06-05 12:30:12 +00:00
zzz
9c06bb3fca * HTTP Proxy error pages: Don't say eepsites are 'temporarily' down since we don't know 2008-06-05 00:46:24 +00:00
zzz
8edfa746e5 * configtunnels.jsp: Add warnings for <= 0 and >= 4 hop configurations 2008-06-04 16:03:34 +00:00
zzz
5729b34f8b * Peer Profiles - Preparation for using bonuses:
- Use CapacityBonus rather than ReachabilityBonus in the Capacity calculation
      - Persist CapacityBonus rather than ReachabilityBonus
      - Include SpeedBonus in the Speed calculation
      - Prevent negative values in Speed and Capacity when using bonuses
      - Clean up SpeedCalculator.java
2008-06-04 15:06:55 +00:00
zzz
2f80f7fa63 Add some config files for a future small distribution 2008-06-04 14:54:05 +00:00
zzz
592e609291 .33-2001 2008-06-01 20:55:32 +00:00
zzz
3396a8813f * Add some compiler flexibility to two obscure SAM makefiles 2008-06-01 20:50:29 +00:00
zzz
6345e669bc * summary bar: Add a warning if you are firewalled and class O 2008-06-01 20:43:51 +00:00
zzz
74a5abbc11 * ProfileOrganizer: Restrict !isSelectable() (i.e. shitlisted) peers from the High Capacity tier,
not just the Fast tier, since we don't use them for tunnels anyway
2008-06-01 20:34:20 +00:00
zzz
19992b1d1b * Logging: Move common WARN output to DEBUG so we can ask users to
set the default log level to WARN without massive spewage
2008-06-01 20:24:43 +00:00
zzz
02e7a19f65 * i2psnark: Change displayed peer idents to match that shown by bytemonsoon 2008-06-01 20:17:48 +00:00
zzz
c6a697df57 * Client Apps: Add new parameter for clients.config,
clientApp.x.startOnLoad=false, to disable loading
        (for SAM for example). Defaults to true of course.
2008-06-01 20:16:17 +00:00
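An illustrative clients.config line using the new parameter; "x" stands for the client's index, as in the commit message:

    # skip loading client x (e.g. SAM); defaults to true
    clientApp.x.startOnLoad=false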
zzz
26bb479957 * summary bar: Hide ident, provide a tooltip and a link 2008-06-01 20:14:00 +00:00
zzz
0c42e7e4b2 make a nicer initialNews.xml, add clarification to config.jsp 2008-05-30 00:08:29 +00:00
zzz
699a62a9b9 Dont bid on private IP addresses in transports 2008-05-27 13:20:56 +00:00
zzz
ffc67d1e5a add a updaterRouter target, containing only i2p.jar and router.jar 2008-05-26 14:39:39 +00:00
zzz
2f72f5ca67 * Throttle: Set a default router.maxParticipatingTunnels = 3000 (was none)
* Stats: Add a fake uptime if not publishing stats, to get participating tunnels
    * build.xml:
      - Add an updateSmall target which includes only the essentials
      - Clean up the build file some 
      - Remove empty eepsite/ and subdirs from i2pupdate.zip
    * configtunnels.jsp: Add warning
    * i2psnark: Catch a bencode exception (bad peer from tracker) earlier
    * i2psnark-standalone: Fix exception http://forum.i2p/viewtopic.php?p=12217
2008-05-26 13:13:26 +00:00
dev
955e7823ad replaced jetty download-url 2008-05-22 20:52:56 +00:00
zzz
7e3800a5cb * Reachability:
- Call the previously unused profile.tunnelTestFailed()
        (redefined to include a probability argument)
        and severely downgrade a peer's capacity upon failures,
        depending on tunnel length and direction.
        This will help push unreachable and malicious peers
        out of the High Capacity tier.
      - Put recent fail rate on profiles.jsp
    * ProfileOrganizer: Logging cleanup
    * eepsite_index.html: Update add-host and jump links
    * HTTP Proxy: Remove trevorreznik jump server from list
2008-05-20 12:48:41 +00:00
dev
6c7691cecb updated history 2008-05-20 11:31:52 +00:00
dev
760c316486 merge of '883c453307272eee439471d4e9da1e120804dac1'
and 'c60598471c5f08b7d7e12e38d39f7a1d4c8ccf63'
2008-05-20 11:30:41 +00:00
dev
5d9d82879f implemented PrivateKeyFile(implements #3) 2008-05-20 11:30:03 +00:00
zzz
9b8772a470 * Throttle: Reject tunnels for first 20m uptime (was 10m)
* TunnelPeerSelectors:
       - Re-enable strict ordering of peers,
         based on XOR distance from a random hash
       - Restrict peers with uptime < 90m from tunnels (was 2h),
         which is really 60m due to rounding in netDb publishing.
    * i2psnark:
       - Limit max pipelined requests from a single peer to 128KB
         (was unlimited; i2p-bt default is 5 * 64KB)
       - Increase max uploaders per torrent to 6 (was 4)
       - Reduce max connections per torrent to 16 (was 24) to increase
         unchoke time and reduce memory consumption
       - Strictly enforce max connections per torrent
       - Choke more gradually when over BW limit
    * help.jsp: Add a link to the FAQ
    * peers.jsp: Fix UDP direction indicators
    * hosts.txt: Add update.postman.i2p
2008-05-18 21:45:54 +00:00
zzz
bc5d87e6f0 * i2psnark:
- Randomize the PeerCheckerTask start times to make global limiting
        work better
      - Calculate bw limits using 40s rather than 4m averages to make
        bw limiting work better
      - Change default bw limit from uplimit/3 to uplimit/2 due to
        overhead reduction from the leaseset bundling change
2008-05-12 13:53:11 +00:00
zzz
d81bff267a * Outbound message:
- Tweak the cache key for efficiency
2008-05-12 13:50:15 +00:00
zzz
042399f293 * Stats:
- Require two udp stats when stats.full=false, caused NPE on peers.jsp
2008-05-12 13:48:41 +00:00
zzz
5be7ea1fc5 * libjbigi:
- Add documentation on dynamic build option
      - Add two speed tests to the build script
      - Clean up the build script, make it easier to build dynamic
2008-05-12 13:47:15 +00:00
zzz
db34665bb1 * Update Handler:
- Add postman to the list
2008-05-12 13:46:23 +00:00
zzz
ed9a03ebc7 * Summary bar:
- Add messages when dropping tunnel requests due to load
2008-05-12 13:45:08 +00:00
zzz
619b5c0e45 * Update Handler:
- Add option to download and verify only
      - Add distinct error message if version check fails
2008-05-10 14:31:18 +00:00
zzz
4c2c5ca232 Simplify oldstats.jsp if no events in a stat 2008-05-10 14:28:37 +00:00
zzz
3a203c3018 Fix the hosed inNetPool.droppedDeliveryStatusDelay stat (caused by an SSU hack) 2008-05-10 14:27:49 +00:00
zzz
3e86ee9746 Dont write out the my.info file 2008-05-10 14:26:24 +00:00
dev
d0855e1fc6 added http://www.i2p2.i2p/_static/i2pupdate.sud as another update-site 2008-05-09 12:37:11 +00:00
zzz
0bde8a24e4 * Reachability:
- Restrict peers requiring introducers from inbound tunnels,
        since it's slow and unreliable... and many of them advertise
        NTCP, which seems unlikely to work
      - Provide warning on summary bar if firewalled with inbound NTCP enabled
    * Stats: Remove the bw.[send,recv]Bps[1,15]s stats unless
      log level net.i2p.router.transport.FIFOBandwidthLimiter >= WARN
      at startup (you didn't get any data unless you set the log level anyway)
    * oldstats.jsp: Don't put 2 decimal places on integer event counts
    * Remove the Internals link from the menu bar
    * i2psnark: Extend startup delay from 1 to 3 minutes
2008-05-07 16:23:54 +00:00
dev
4049ff5167 -2 2008-05-06 20:04:26 +00:00
dev
89389ccc13 added i2jump.i2p as jump-site 2008-05-06 19:44:35 +00:00
zzz
959e308578 Add router version to profiles.jsp 2008-05-05 16:26:15 +00:00
zzz
8d4cbd8556 I2PTunnel: Change default outproxy to false.i2p 2008-05-05 14:42:17 +00:00
zzz
99b9c93636 -1 2008-05-05 14:10:19 +00:00
zzz
47c666c582 * Summary bar:
- Add reachability status
      - Add participating tunnel acceptance status
    * Throttle: Reject tunnels for first 10m uptime
2008-05-05 14:07:40 +00:00
zzz
a3c330fd9d - Restrict <= .32 SSU-only peers from inbound tunnels,
since they don't know if they are unreachable
2008-05-05 14:04:41 +00:00
zzz
a6f3478db3 * Outbound message:
- Fix a couple of tunnel cache cleaning bugs
      - Cache based on source+dest pairs rather than just dest
      - Send the reply leaseSet only when necessary,
        rather than all the time (big savings in overhead)
      - Enable persistent lease selection again
      - Logging tweaks
2008-05-05 14:01:22 +00:00
zzz
b1af22a15e - Have SSU bid aggressively when it has less than 3 peers, so
we can determine our IP address and do peer testing.
        Otherwise a router may never determine its IP address or reachability status.
2008-05-05 13:57:54 +00:00
zzz
65ec41c48f NetDb Stats: Cleanup of commented out stats 2008-05-05 13:55:50 +00:00
zzz
aebf16add7 0.6.1.33 version and news 2008-04-25 13:57:38 +00:00
zzz
1de0563c94 update forum link on susidns page 2008-04-25 13:22:41 +00:00
dev
c6059b9d85 merge of '20e06eb5e2ee39991682cae68511fca1944cd5b0'
and '8adc3b02e35ce3275e813ca9c64d75d02a0d9310'
2008-04-20 19:21:35 +00:00
dev
f68c9242a9 added news-entry for trac 2008-04-20 19:21:13 +00:00
zzz
51838ba051 * Outbound message/Reachability:
- Fix a bug from -19 causing the persistent lease selection
        removed in -17 to be back again
      - Use netDb-listed-unreachable instead of detected-unreachable
        for exclusion of unreachable peers from selected leases,
        as there are potential anonymity problems with using
        detected-unreachable
      - Tweak logging some more
    * NetDb stats: Remove a couple more including the inefficient stat_identities
2008-04-20 18:02:15 +00:00
zzz
cf50b7eac1 * Reachability:
- Track unreachable peers persistently
        (i.e. separately from shitlist, and not cleared when they contact us)
      - Exclude detected unreachable peers from inbound tunnels
      - Exclude detected unreachable peers from selected leases
      - Exclude detected unreachable floodfill peers from lookups
      - Show unreachable status on profiles.jsp
2008-04-17 18:59:15 +00:00
zzz
2edd84e088 * SSU/Reachability:
- Extend shitlist time from 4-8m to 40-60m
      - Add some shitlist logging
      - Don't shitlist twice when unreachable on all transports
      - Exclude netDb-listed unreachable peers from inbound tunnels;
        this won't help much since there are very few of these now
      - Remove 10s delay on inbound UDP connections used for the
        0.6.1.10 transition
      - Track and display UDP connection direction on peers.jsp
      - Show shitlist status in-line on profiles.jsp
2008-04-16 20:51:57 +00:00
zzz
5ba1e458c6 * SSU Reachability/PeerTestManager:
- Back out strict peer ordering until we fix SSU
      - Back out persistent lease selection until we fix SSU
      - Fix detection of UDP REJECT_UNSOLICITED by recording status on expiration
      - Increase known Charlie time to 10m; 3m wasn't enough
      - Don't continue retransmitting peer test if we know Charlie
      - Don't run multiple peer tests at once
      - Tighten test frequency range to 6.5-19.5m, was 0-26m
2008-04-15 18:27:43 +00:00
zzz
0b600669c7 * Addressbook: Disallow '.-' and '-.' in host names
* NTCP: Don't drop a connection unless both directions are idle;
            Fix idle time for outbound connections
    * Outbound message: Make sure cached lease is in current leaseSet
    * Stats: Put all NetworkDatabase stats in same group
    * TunnelPool: Stop building tunnels and leaseSets after client shutdown
    * i2psnark: Add locking to prevent two I2CP connections
2008-04-12 21:34:25 +00:00
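A minimal Java sketch of the addressbook host-name rule in the first item above; the class and method names are hypothetical, not the actual addressbook validator:

    // Hypothetical helper, not the real addressbook code.
    class HostNameCheckSketch {
        /** Reject host names containing ".-" or "-." as described above. */
        static boolean allowedDashDotUsage(String host) {
            return !host.contains(".-") && !host.contains("-.");
        }
    }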
dev
215eb14d38 no need anymore to set I2P-Home when using eepget 2008-04-07 18:54:48 +00:00
zzz
20ff2f5bdc Minor cleanups, -15 2008-04-07 17:42:54 +00:00
zzz
edc02e5efa Implement outbound bandwidth limiting for i2psnark 2008-04-07 17:41:58 +00:00
zzz
d57356f807 Reduce priority of participating traffic 2008-04-07 17:41:19 +00:00
zzz
a7a6c75ac5 * ExploratoryPeerSelector: Try NonFailing even more
* HostsTxtNamingService: Add reverse lookup support
    * Outbound message: Minor cleanup
    * i2psnark TrackerCLient: Minor cleanup
    * checklist.txt: Minor edit
    * hosts.txt: Add perv.i2p, false.i2p, mtn.i2p2.i2p
    * i2ptunnel.config: Change CVS client to mtn
    * netdb.jsp: Show leaseSet destinations using reverse lookup
    * profiles.jsp: First cut at showing floodfill data
2008-03-30 21:50:35 +00:00
zzz
b9e2def552 * Send messages for the same destination to the same inbound
lease to reduce out-of-order delivery.
    * ExploratoryPeerSelector: Back out the floodfill peer exclusion
      for now, as it prevents speed rating of those peers
2008-03-27 13:03:16 +00:00
zzz
4d7417401c * ReseedHandler: Support multiple urls,
add netdb.i2p2.de as a 2nd default
2008-03-26 20:15:47 +00:00
zzz
40a9e959e8 * i2psnark:
- Add support for secondary open trackers
      - Refactor and simplify the TrackerClient code
      - Add welterde's tracker to the default list
      - Don't have eepget retry announces
      - Slow down tracker contacts if they've failed for a while
      - Add some debug support showing connections (?p=2)
2008-03-25 21:54:54 +00:00
zzz
42bbb4a9ff * NewsFetcher: Fix bug causing fetch every 10m 2008-03-22 17:10:49 +00:00
zzz
9500a55532 * Tunnel Testing:
- Fix counting so it really takes 4 consecutive failures
        rather than 4 total to remove a tunnel
      - Credit or blame goes to the exploratory tunnel as well
        as the tunnel being tested
      - Adjust tunnel test timeout based on tunnel length
    * ExploratoryPeerSelector: Tweak logging
    * ProfileOrganizer: Adjust integration calculation again
    * build.xml: Add to help
    * checklist.txt: Tweak
    * readme.html: Fix forum links
    * netDb: Remove tunnel.testFailedTime
2008-03-22 13:07:38 +00:00
zzz
0556f15068 remove corrupt sex0r.i2p from hosts.txt 2008-03-21 15:23:10 +00:00
zzz
6e981874a5 * ExploratoryPeerSelector:
- Exclude floodfill peers
      - Tweak the HighCap vs. NonFailing decision
    * i2psnark: Increase retries for .torrent fetch
    * IRC Proxy: Prevent mIRC from sending an alternate DCC request
      containing an IP
    * readme.html: Reorder some items
    * Stats: Add some more required stats
    * Streaming lib: Fix slow start to be exponential growth,
      fix congestion avoidance to be linear growth.
      Should speed up local connections a lot, and remote
      connections a little.
2008-03-19 00:20:15 +00:00
zzz
8886d61caa forum news 2008-03-16 16:12:57 +00:00
zzz
b6fe81a982 Floodfill Search: Prefer heard-from, unfailing, unshitlisted floodfill peers 2008-03-14 22:53:18 +00:00
zzz
e7cdb965ba * ProfileOrganizer:
- Use more recent stats to calculate integration
       - Show that fast peers are also high-capacity on profiles.jsp
    * readme.html: Update Syndie link
    * TunnelPool: Update comments
    * netDb: Report 1-2h uptime as 90m to further frustrate tracking,
      get rid of the 60s tunnel stats
      (effective as of .33 to provide cover)
2008-03-13 23:13:32 +00:00
zzz
4fa4357bf1 * Floodfill Search:
- Fix a bug that caused a single FloodfillOnlySearchJob
         instance to be run multiple times, with unpredictable
         results
       - Select ff peers randomly to improve reliability
       - Add some bulletproofing
2008-03-13 20:23:46 +00:00
zzz
46307c60d4 * ProfileOrganizer:
- Don't require a peer to be high-capacity to be
         well-integrated (not used for anything right now,
         but want to get it right for possible floodfill verification)
       - Don't fall back to median for high-capacity threshold
         if the mean is higher than the median, this prevents
         frequent large high-capacity counts
       - Fix high-capacity selector that picked one too many
    * Console: put well-integrated count back in the summary
2008-03-11 22:59:47 +00:00
zzz
e6a0c2f4f0 * EepGet: Fix byte count for bytesTransferred status listeners
(fixes command line status)
    * UpdateHandler:
       - Fix byte count display
       - Display final status on router console
       - Don't allow multiple update jobs to queue up
       - Increase max retries
       - Code cleanup
       - Don't show 'check for update' button when update in progress
       - Enhance error messages
2008-03-10 16:27:40 +00:00
zzz
cb41bf6023 NetDb: Comment out published netDb stats disabled for .32 2008-03-10 16:22:24 +00:00
zzz
d23c8a8331 -2 2008-03-09 14:55:44 +00:00
zzz
b1beb46ca2 propagate from branch 'i2p.i2p.i2p-0.6.1.33-pre' (head adbe93ae091c4ca78306ef94968a0c1d788e2c01)
to branch 'i2p.i2p' (head f541ec6c1ca7ffae49e31ee75559695d64152fa1)
2008-03-09 14:45:31 +00:00
6606c83cb2 2008-03-09 Complication
* Give the Jetty build file ability to ask permission
      before downloading the Jetty archive from the web,
      and to verify its SHA1 + MD5 hashes. Adjust the main build file
      in accordance with this change.
    * Improve the release checklist.
2008-03-08 20:37:45 +00:00
zzz
5998f5c9bd * ClientPeerSelector: Implement strict ordering of peers,
based on XOR distance from a random hash
      separately generated for each tunnel pool
2008-03-07 23:24:49 +00:00
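A minimal, illustrative Java sketch of the strict ordering described above (sorting peer hashes by XOR distance from a per-pool random hash); the class and method names are hypothetical, not the router's actual ClientPeerSelector:

    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch of XOR-distance ordering, not I2P's real code.
    class XorOrderSketch {
        static void sortByXorDistance(List<byte[]> peerHashes, final byte[] randomKey) {
            Collections.sort(peerHashes, new Comparator<byte[]>() {
                public int compare(byte[] a, byte[] b) {
                    // compare byte-by-byte XOR distance from the random reference hash
                    for (int i = 0; i < randomKey.length; i++) {
                        int da = (a[i] ^ randomKey[i]) & 0xff;
                        int db = (b[i] ^ randomKey[i]) & 0xff;
                        if (da != db)
                            return da - db;
                    }
                    return 0;
                }
            });
        }
    }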
zzz
cffcbe5f94 update news and version numbers to .32 2008-03-07 15:08:26 +00:00
zzz
0c75725f5e * ProfileOrganizer, TunnelPoolSettings, ClientPeerSelector:
- Prevent peers with matching IPs from joining same tunnel.
        Match 0-4 bytes of IP (0=off, 1=most restrictive, 4=least).
        Default is 2 (disallow routers in same /16).
        Set with router.defaultPool.IPRestriction=x
      - Comment out unused RebuildPeriod pool setting
      - Add random key to pool in preparation for XOR peer ordering
2008-03-07 14:36:40 +00:00
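For illustration, the restriction above is set with one property; the value is how many leading IP bytes, if identical between two peers, keep them out of the same tunnel (default shown):

    # 0=off, 1=most restrictive, 4=least; 2 disallows two routers from the same /16
    router.defaultPool.IPRestriction=2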
zzz
5ef325408c Optimize naming lookups for a destkey 2008-03-07 14:31:36 +00:00
zzz
d957712e88 Change a common wtf error to a warn 2008-03-06 14:03:49 +00:00
zzz
9233c2ed4c Add create account link in susimail 2008-03-06 14:00:33 +00:00
zzz
7ad54eb749 * Stats: Add code to disable most stats to save memory.
Set on configstats.jsp or set stat.full=false to disable the stats.
      (true by default for now)
2008-03-05 18:44:45 +00:00
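An illustrative line for the property named above (also adjustable on configstats.jsp):

    # disable most stats to save memory; currently defaults to true
    stat.full=false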
zzz
5740e20c62 -3201 2008-03-05 14:25:39 +00:00
zzz
0a9114fc2f add a i2psnark StartAll button 2008-03-05 14:20:02 +00:00
zzz
71ddfa42e1 * i2psnark: Don't do a naming lookup for Base64 destkeys 2008-03-05 14:19:19 +00:00
zzz
1ecb84f3fb * Naming: Make HostsTxt the sole default NamingService
(was Meta = PetName + HostsTxt)
    * Naming: Add two new experimental NamingServices, EepGet and Exec,
      not enabled by default -
      see source comments in core/java/src/net/i2p/client/naming
      for configuration instructions
2008-03-05 14:16:04 +00:00
zzz
c46b06fb81 Fix netdb.knownLeaseSets count reported by floodfill routers 2008-03-01 16:13:41 +00:00
zzz
a7397879aa correct instructions 2008-02-29 16:50:37 +00:00
zzz
9b86da7ce5 upcoming .32 news 2008-02-29 14:51:24 +00:00
zzz
c68977ca8c * i2ptunnel: Add 3-hop option to edit.jsp to match configtunnels.jsp
* i2psnark: Remove orion and gaytorrents from default tracker list
    * Remove orion from jump list and from eepsite_index.html
    * Jbigi: Change jbigi version to 4.2.2 in build scripts - tested by amiga
    * Capitalize OutboundMessageDistributor job name
    * TunnelPool: Add a warning if all tunnels are backlogged
2008-02-27 15:18:32 +00:00
zzz
bc7bd628db * Reintroduce NTCP backlog pushback, with switch back to
previous tunnel when no longer backlogged
    * Catch an nio exception in an NTCP logging statement if loglevel is WARN
    * IRC Proxy: terminate all messages with \r\n (thanks TrivialPursuit!)
2008-02-26 19:33:11 +00:00
zzz
bc7ab39131 +x the c build files 2008-02-21 14:53:50 +00:00
zzz
100163e03b * Raise inbound default bandwidth to 32KBps
* Fix config.jsp that showed 0KBps share bandwidth by default
2008-02-21 14:05:33 +00:00
zzz
49c02f13b2 -5 2008-02-19 15:39:44 +00:00
zzz
40f072e25e Display capabilities on profiles.jsp 2008-02-19 15:32:39 +00:00
zzz
918b1acb8f * Tunnels: Enforce max tunnel length of 8, catch an index error
http://forum.i2p/viewtopic.php?t=2561
2008-02-19 15:28:41 +00:00
zzz
bc16078e3f Remove some stats from netDb, effective in .32 2008-02-19 15:22:46 +00:00
zzz
4a8dbd0634 * Addressbook: Disallow '--' in host names except in IDN,
add some reserved host names
2008-02-19 15:21:58 +00:00
zzz
38c0184f95 clarify the i2ptunnel edit page dropdowns 2008-02-19 15:20:42 +00:00
zzz
68829ddb99 merge of 'cc7dee6f711dd10db6c1f42af8dc7ba6f6b0002d'
and 'dc2fa2d01da4c7b3733d4dadb85d757d592c1fa6'
2008-02-19 15:16:53 +00:00
zzz
19089bd6a7 * Fix race in TunnelDispatcher which caused
participating tunnel count to seesaw -
      should increase network capacity
    * Leave participating tunnels in 10s batches for efficiency
    * Update participating tunnel ratestat when leaving a tunnel too,
      to generate a smoother graph
    * Fix tunnel.participatingMessageCount stat to include all
      participating tunnels, not just outbound endpoints
    * Simplify Expire Tunnel job name
2008-02-16 16:25:21 +00:00
zzz
69cc0afd1b * PersistentDataStore: Write out 300 records every 10 min
rather than 1 every 10 sec;
      Don't store leasesets to disk or read them in
    * Combine rates for pools with the same length setting
      in the new tunnel build algorithm
    * Clarify a log message in the UpdateHandler
2008-02-13 21:42:09 +00:00
zzz
d2f3a262db * Make graphs clickable to get larger graphs
* Change SimpleTimer CRIT to a WARN, increase threshold
    * Checklist update
2008-02-13 11:49:24 +00:00
dev
c1703b872d probably fixed a bug in udp-transport 2008-02-11 20:45:33 +00:00
dev
f7b0e8181b merge of '1d753ff05fc1c942cf6ce8feeadfaa577bff27ab'
and 'e20c0075f57440edcebaac0009a0ef0327d563cc'
2008-02-11 20:38:47 +00:00
zzz
43f2695901 * Add new tunnel build algorithm (preliminary)
* Change NTCP backlogged message from error to warning
    * Checklist updates
2008-02-10 18:29:21 +00:00
ea0d4ffd7f * 2008-02-10 0.6.1.31 released
2008-02-10 Complication
    * Update news and version numbers
2008-02-10 13:59:20 +00:00
dev
0ed29573a7 minor 2008-02-09 13:18:22 +00:00
zzz
1365d3e3b9 Remove orion reference in dnfh error page 2008-02-08 21:04:23 +00:00
zzz
40befb5a92 remove comments from hosts.txt 2008-02-08 19:16:48 +00:00
zzz
9c16eec361 checklist updates, +x some files 2008-02-08 00:15:32 +00:00
zzz
093c69637d * build.xml: Add some apps to javadoc
* checklist.txt: Add some things
    * news.xml: Minor edit
    * runplain.sh: Add some comments
2008-02-06 16:38:23 +00:00
zzz
134ec7acea * news.xml: make links relative
* wrapper.config: Add some comments
* checklist.txt: Add some things
2008-02-05 03:14:18 +00:00
0802a5ae40 2008-02-05 Complication
* Change the dates too (sorry for such forgetfulness!)
2008-02-04 23:52:11 +00:00
14ddfb360f 2008-02-04 Complication
* Also use the new key for checking, and add it into news.xml
2008-02-04 23:20:28 +00:00
78ad831028 2008-02-04 Complication
* Added my release signing key into TrustedUpdate.java
2008-02-04 21:34:52 +00:00
zzz
22f1684262 * NewsFetcher: Change fetch failed from error to warning
* installer: Fix URL and "email"
    * checklist.txt: New release checklist
2008-01-31 19:56:00 +00:00
zzz
83f51b4a66 2008-01-29 zzz
* Addressbook: Change default subscription
    * ConfigUpdateHandler: Change default news URL
    * initialNews.xml: Update version to .31
    * news.xml: More updates
    * hosts.txt: Add i2p-projekt.i2p
    * readme.html: More URL updates
    * SusiDNS: Change references to default subscription
2008-01-29 22:20:04 +00:00
zzz
9c28de0704 2008-01-28 zzz
* news.xml: Updates, still preliminary
    * ReseedHandler: Change default URL
    * i2ptunnel.config: Change default outproxies
    * readme.html: Change *.i2p.net URLs
    * help.jsp: Change *.i2p.net URLs
    * eepsite_index.html: Change stats.i2p addressbook subscription URL
    * hosts.txt: Add krabs.i2p, true.i2p, www.i2p2.i2p
2008-01-28 19:24:12 +00:00
zzz
c69fda2298 Drop unused directories 2008-01-27 14:48:50 +00:00
zzz
cabb22331b Drop unused directories 2008-01-27 14:42:04 +00:00
zzz
5b3aca29a8 replace orion.i2p with perv.i2p/stats.cgi 2008-01-09 22:40:33 +00:00
zzz
f35cbf59d8 * NewsFetcher: add last-modified support, reduce number of retries
* Error pages: add icon and logo,
        clarify 'destination not found' and 'proxy not found' pages
2008-01-09 04:11:12 +00:00
zzz
a96119d09b 2008-01-08 zzz
* addressbook: Limit size of subscribed hosts.txt,
        don't save old etag or last-modified data
    * EepGet: Add some logging,
        enforce size limits even when size not in returned header,
        don't return old etag or last-modified data,
        don't call transferFailed listener more than once
2008-01-09 02:15:43 +00:00
zzz
2711294aee (zzz) sign my update signing key 2008-01-09 01:56:38 +00:00
zzz
f838b1828b 2008-01-07 zzz
* profiles.jsp formatting cleanup
    * NTCP: Reduce max idle time from 60m to 20m
    * NTCP: Fix idle time on connections with zero messages,
      correctly drop these connections
2008-01-07 08:09:43 +00:00
zzz
fbf6282c1a 2008-01-03 zzz
* addressbook: Do basic validation of hostnames and destkeys
    * susidns: Add support for the private addressbook,
      update the text and links somewhat
2008-01-04 02:37:24 +00:00
zzz
5195a5c1fc 2008-01-02 zzz
* Add stats.i2p to the jump list
    * Impose 20MB limit on POSTs and catch OOMs in POST
    * eepsite_index.html: add stats.i2p services
    * addressbook: log source of new keys; disallow dests > 516 bytes
    * addressbook: convert hostnames to lower case to prevent duplicates
    * susidns: generalize references to orion
2008-01-02 22:08:48 +00:00
zzz
62b18b18b5 2007-12-29 zzz
* Tweak IRC inbound PONG filtering to fix xchat/BitchX lagometers
2007-12-30 03:45:09 +00:00
zzz
7c8f519b35 2007-12-29 zzz
* Allow commas in router.trustedUpdateKeys and router.updateURL again
2007-12-29 23:59:24 +00:00
zzz
d6fb979616 2007-12-29 zzz
* Change default news host from dev.i2p.net to dev.i2p
2007-12-29 22:15:11 +00:00
zzz
f568d21969 2007-12-29 zzz
* Change jetty timeout from 30 to 60 sec (thanks sponge!)
2007-12-29 21:08:29 +00:00
zzz
7fe9d590f5 2007-12-28 zzz
* Add zzz's update signing key
2007-12-28 20:27:16 +00:00
0a1240ebfd 2007-12-26 Complication
* Improve reseed handler (less repetitive code,
      avoid reporting errors when less than 10% of fetches fail)
2007-12-26 20:55:07 +00:00
4e68f2a157 2007-12-26 Complication
* Escape both CR, LF and CR LF line breaks in Router.saveConfig()
      and unescape them in DataHelper.loadProps() to support
      saving and loading config properties with line breaks
    * Change the update URLs textbox into a textarea like keys have,
      so different URLs go on different lines
    * Modify TrustedUpdate to provide a method which supplies a key list
      delimited with CR LF line breaks
    * Modify DEFAULT_UPDATE_URL to supply a default URL list
      delimited with CR LF line breaks
    * Modify selectUpdateURL() to handle URL lists
      delimited by any kind of line breaks
    * Start saving trusted update keys
    * Improve formatting on configupdate.jsp
2007-12-26 08:14:54 +00:00
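A simplified Java sketch of the escape/unescape idea in the first bullet above; the helper names are hypothetical, and the real Router.saveConfig()/DataHelper.loadProps() code may differ:

    // Hypothetical helpers, not the actual I2P implementation: encode line
    // breaks so multi-line property values survive a save/load round trip.
    class LineBreakEscapeSketch {
        static String escape(String value) {
            return value.replace("\r", "\\r").replace("\n", "\\n");
        }
        static String unescape(String stored) {
            return stored.replace("\\r", "\r").replace("\\n", "\n");
        }
    }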
zzz
e9bd6907d1 2007-12-22 zzz
* Add support for multiple update URLs
    * Change default for update to use i2p proxy,
      add several URLs as defaults
    * Enable trusted key form on configupdate.jsp
2007-12-22 23:58:46 +00:00
zzz
b20495c39f (zzz) Clarify the 'destination not found' error page 2007-12-22 20:54:44 +00:00
zzz
0ecbc4c27b (zzz) remove anonymitytracker from default tracker list, not seen in over 6 months 2007-12-17 02:33:05 +00:00
zzz
4d5b1d4c3f 2007-12-10 zzz
* Fix NPE in CLI TrustedUpdate keygen
2007-12-10 22:22:59 +00:00
17b719f3f7 2007-12-02 Complication
* Commit SAM v2 patch from mkvore (thank you!)
    * Minor reformatting to preserve consistent whitespace
      in old SAM classes (new classes unaltered)
2007-12-03 04:19:25 +00:00
979a3e98d8 2007-12-01 Complication
* Separate the checks "does Jetty .zip file need downloading"
      and "does Jetty .zip file need extracting" in the Jetty buildfile.
      First download (unless already done), then extract (unless done).
2007-12-02 03:13:15 +00:00
zzz
c6a1112f0a 2007-11-26 zzz
* i2psnark: add timeout for receive inactivity
2007-11-26 21:53:58 +00:00
zzz
4ebcc95d9f 2007-11-24 zzz
* i2psnark: increase streaming lib write timeout to 240 sec and change
      timeout action from "ping" to "disconnect", as the fix in .30 to
      honor options on outbound connections led to hung outbound connections
      (bitfield never transmitted, connection never dropped)
2007-11-24 20:22:45 +00:00
zzz
7e59ce27fa (zzz) i2phost.i2p => i2host.i2p 2007-11-06 20:15:25 +00:00
zzz
22345a264e (zzz) fix new jump server typo 2007-11-06 19:53:06 +00:00
03739996da 2007-11-06 jrandom
* add i2phost.i2p to the jump list
2007-11-06 11:26:01 +00:00
zzz
819a72d4f6 2007-10-11 zzz
* IRC Proxy: Fix several possible anonymity holes:
      - Block CTCP in NOTICE messages
      - Block CTCP anywhere in PRIVMSG and NOTICE, not just at first character
      - Check for lower case commands
    (Thanks sponge!)
2007-10-11 06:03:21 +00:00
e480931e20 2007-10-07 jrandom
* back out the NTCP backlog pushback, as it could be used to mount an
      active anonymity attack.
2007-10-08 04:11:36 +00:00
zzz
3f01d0a69e (zzz) .30 details 2007-10-08 04:03:40 +00:00
f67f47f0cd 0.6.1.30 2007-10-08 03:09:35 +00:00
5ad6ee60eb 0.6.1.30 2007-10-08 03:01:47 +00:00
5accba6cdc 2007-10-07 Complication
* Fix an issue in EepGet whereby sending of "etag" and "lastModified" headers
      broke retrying.
2007-10-08 02:36:17 +00:00
zzz
313e1704df 2007-09-27 zzz
* Implement pushback of NTCP transport backlog to the outbound tunnel selection code
    * Clean up the NTCP and UDP tables on peers.jsp to be consistent,
      fix some of the sorting
2007-09-27 03:52:32 +00:00
zzz
cf4d2b17c9 2007-09-22 zzz
* Send messages for the same destination out the same outbound
      tunnel to reduce out-of-order delivery.
2007-09-23 02:44:34 +00:00
zzz
9145eedc35 2007-09-19 zzz
* i2psnark: Fix broken multifile torrent Delete;
        cleanup Storage resources in AddTorrent;
        don't autostart torrent after Create
2007-09-20 01:44:02 +00:00
zzz
b772179077 2007-09-18 zzz
* eepsite_index.html: Add links to trevorreznik address book
    * streaming lib: Fix SocketManagerFactory to honor options on outbound connections
    * streaming lib: Fix setDefaultOptions() when called with a ConnectionOptions parameter
    * i2psnark: Don't make outbound connections to already-connected peers
    * i2psnark: Debug logging cleanup
2007-09-18 19:09:19 +00:00
zzz
9054a196ce 2007-09-14 zzz
* eepget: Increase header timeout to 45s
    * HTTP proxy: Return a better error message for localhost requests
    * tunnels: Fix PooledTunnelCreatorConfig memory leak
2007-09-15 01:58:30 +00:00
zzz
d28a96ac7d 2007-09-09 zzz
* eepget: Add support for Last-Modified and If-Modified-Since
    * addressbook: Finish incomplete support for Last-Modified
2007-09-09 17:38:53 +00:00
zzz
9c73f80ac3 (zzz) Copy over SocketTimeout.java file from syndie for EepGet.java 2007-09-08 20:21:15 +00:00
20c46cff04 synced up with the eepget from the syndie source tree (allowing better
control of timeouts and transparent redirection).  the users of eepget
in this source tree don't necessarily use the timeout controls, though
they can be updated to do so
2007-09-08 02:24:01 +00:00
f332513755 added trevorreznik.i2p 2007-09-02 01:36:54 +00:00
zzz
cb69a66498 (zzz) .29 announcement 2007-08-24 02:34:40 +00:00
1c66543938 0.6.1.29 released 2007-08-24 00:33:28 +00:00
zzz
53ab3c472e 2007-08-13 zzz
* readme.html - Add inproxy.tino.i2p, replace search.i2p with eepsites.i2p,
      tweak the eepsite and troubleshooting sections
2007-08-13 19:42:59 +00:00
zzz
a4b221fa71 2007-08-11 zzz
* Add stats for individual tunnel rates (nice when graphed)
    * i2psnark: Fix outbound tunnel nickname
2007-08-11 20:48:14 +00:00
e3e1d0842d 2007-08-05 Complication
* Update the sharing calculator on config.jsp
      and explain the trade-off even more thoroughly.
2007-08-06 03:35:42 +00:00
99b5bf9609 2007-08-04 Complication
* Lower the threshold between the K and L bandwidth class,
      so that K is now < 12 KB/s, instead of <= 16 KB/s.
      Hopefully this lets people with 128 kbit/s (16 KB/s) upload lines
      participate in routing, if they keep the default share percentage.
2007-08-05 03:25:30 +00:00
zzz
da10fe0df7 (zzz) ask for bandwidth 2007-07-19 18:05:44 +00:00
zzz
9fd5ba7b2d 2007-07-16 zzz
* i2psnark: Add tooltip info for choked/uninterested
2007-07-16 21:15:51 +00:00
zzz
05b5df9d76 2007-07-16 zzz
* Make selection of graphed data configurable via configstats.jsp,
      remove most of the default graphs to save some memory
2007-07-16 20:47:57 +00:00
zzz
5c1dc79767 2007-07-15 zzz
* Add current values to graph legends
    * Fix up previous Rate fix to check for divide by zero
2007-07-15 18:34:33 +00:00
4acd2996c5 2007-07-14 Complication
* Take the post-download routerInfo size check back out of ReseedHandler,
      since it wasn't helpful, and a lower limit caused false warnings.
    * Give EepGet ability to enforce a min/max HTTP response size.
    * Enforce a maximum response size of 8 MB when ReseedHandler
      downloads into a ByteArrayOutputStream.
    * Refactor ReseedHandler/ReseedRunner from static to ordinary classes,
      change invocation from RouterConsoleRunner accordingly.
    * Add an EepGet status listener to ReseedHandler to log causes of reseed failure,
      provide status reports to indicate the progress of reseeding.
    * Enable icon for default eepsite, and the index page
      of the router console (more later).
2007-07-15 00:56:18 +00:00
zzz
16fa6a89bc 2007-07-14 zzz
* Clean up graphs.jsp - set K=1024 where appropriate,
      output image sizes in html, catch ooms, other minor tweaks
    * Fix current event count truncation which fixes graphs with low
      60-sec event counts displaying high values
      (bw.* and router.* graphs for example were 1.5x too high)
      Affects all "events per period" (non-lifetime) counts.
2007-07-14 18:44:11 +00:00
zzz
2a72e8574b 2007-07-09 zzz
* i2psnark: give a better error message for a non-i2p torrent
2007-07-10 01:20:37 +00:00
zzz
d4a1bcf28f 2007-07-07 zzz
* Add auto-detect IP/Port to NTCP. When enabled on config.jsp,
      SSU will notify/restart NTCP when the external address changes.
      Now you can enable inbound TCP without a static IP or dyndns service.
2007-07-07 20:03:50 +00:00
zzz
409b71def5 2007-07-04 zzz
* Display calculated share bandwidth and remove load testing
      on config.jsp
2007-07-04 22:58:48 +00:00
zzz
2dc5fbda02 2007-07-01 zzz
* Replace broken option i2np.udp.alwaysPreferred with
      i2np.udp.preferred and adjust UDP bids; possible settings are
      "false" (default), "true", and "always".
      Default setting results in same behavior as before
      (NTCP is preferred unless it isn't established and UDP is established).
      Use to compare NTCP and UDP transports.
2007-07-01 22:07:52 +00:00
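An illustrative configuration line for the replacement option described above; valid values are "false" (the default), "true", and "always":

    # used to compare the NTCP and UDP transports
    i2np.udp.preferred=true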
71aaf03d09 2007-06-27 jrandom
* fix for a streaming lib bug that could leave a thread waiting
      indefinitely (thanks Complication!)
2007-06-28 01:51:16 +00:00
30c99e630b 2007-06-16 Complication
* First pass on EepGet and ReseedHandler improvements,
      please avoid use on routers which matter!
    * Give EepGet ability of downloading into an OutputStream,
      such as the ByteArrayOutputStream of ReseedHandler.
    * Detect failure to reseed better, report it persistently
      and more verbosely, provide a link to logs
      and suggest manual reseed.
2007-06-16 23:15:49 +00:00
445b39171a 2007-05-06 Complication
* spelling correction to history.txt
2007-05-06 20:02:04 +00:00
571c2d6047 2007-05-06 Complication
* Fix the build.xml file, so the preppkg build target won't try copying files
      which became deprecated with the old Syndie (thanks for alerting, itsu!)
2007-05-06 19:52:39 +00:00
zzz
42ff763933 (zzz) 4-10 2007-04-11 06:01:35 +00:00
zzz
727edc3ff9 (zzz) add 204 log link 2007-04-08 04:18:18 +00:00
zzz
82a4758a0a (zzz) 3-27 and 4-3 2007-04-05 06:11:21 +00:00
zzz
915914ebb3 2007-03-31 zzz
* Add trevorreznik jump server to the http proxy error page
    * Add anonymity to the trackers supporting details links in i2psnark
2007-03-31 21:50:51 +00:00
zzz
c438b56378 (zzz) 3-20 2007-03-25 22:58:18 +00:00
zzz
307ccfb1b4 2007-03-24 zzz
* Remove Syndie from build targets and navbar
2007-03-24 07:57:37 +00:00
zzz
34e23259b4 2007-03-22 zzz
* i2psnark tracker handling tweaks:
    -   Add link to tracker details page (Postman only for now, requires bytemonsoon patch)
    -   Add Base URL to tracker list configuration
    -   Web page links built from tracker list Base URLs
    -   Only build and sort tracker list once
    -   Add anonymityWeb tracker to default list
    -   Add tooltip info for TrackerErrs
    -   Stop torrent if not registered with tracker
    -   Mark temp files as delete on exit
2007-03-22 05:21:25 +00:00
zzz
036802d66a 2007-03-18 zzz
* i2psnark: Cleanup some handling of saved partial pieces
    * i2psnark: Put bit counting in Bitfield.java for efficiency
    * i2psnark: Save torrent completion state in i2psnark.config
2007-03-18 21:57:01 +00:00
zzz
6a7dbc8e3a (zzz) i2psnark: Save torrent completion status in i2psnark.config 2007-03-18 21:43:41 +00:00
zzz
cf4a9ffc27 (zzz) i2psnark: Put bit counting in Bitfield.java for efficiency 2007-03-18 21:28:28 +00:00
zzz
6ef4adf318 (zzz) i2psnark: Remove saved partial when halted, don't save partial when halted 2007-03-18 21:25:18 +00:00
zzz
f84c9bf3b1 (zzz) .28 news 2007-03-18 02:17:16 +00:00
da0837bd58 ugh, jrandom sucks. 0.6.1.28 2007-03-18 01:47:59 +00:00
9094a62273 oops 2007-03-17 21:19:37 +00:00
026183a655 * 2007-03-17 0.6.1.28 released 2007-03-17 21:18:12 +00:00
zzz
b033c7945c (zzz) i2psnark: clarify that total uploader value is peers on UI. (thanks jadeSerpent) 2007-03-15 18:26:06 +00:00
zzz
0c2dcf0845 (zzz) 3-13 mtg 2007-03-15 08:21:35 +00:00
zzz
b6e597e5bf 2007-03-13 zzz
* i2psnark: Make max total uploaders configurable (thanks Amiga4000!)
2007-03-14 04:06:27 +00:00
ae402baa71 2007-03-12 jrandom
* dodge a race on startup (thanks zzz!)
2007-03-12 18:19:57 +00:00
zzz
d6c8a4d9eb 2007-03-10 zzz
* Streaming lib: Change initial RTT deviation from RTT to RTT/2
      to reduce early RTO values
2007-03-10 08:45:27 +00:00
zzz
8e2849b7e5 2007-03-08 zzz
* i2psnark changes to improve upload performance:
    *  Implement total uploader limit (10)
    *  Don't time out non-piece messages
    *  Change chunk size to 32K (was 64K)
    *  Change request limit to 64K (was 256K)
    * i2psnark: Disconnect from seeds when complete
2007-03-08 18:55:17 +00:00
zzz
0aa0cd330f (zzz) take dynamic router keys off the config page 2007-03-07 21:05:21 +00:00
zzz
0960cafaf5 2007-03-07 zzz
* Streaming lib changes to improve upstream performance during congestion:
    *   Change min window size from 12 to 1
    *   Change max timeout from 10 to 45 sec
    *   Change initial timeout from 10 to 15 sec
    *   Change initial window size for i2psnark from 12 to 1
    *   Change slow start growth rate for i2psnark from 1/2 to 1
2007-03-07 05:11:45 +00:00
zzz
2088a28053 (zzz) Mar. 04 - More tweaks to the default eepsite start page to help
people start up the tunnel which is now stopped by default
2007-03-04 23:25:18 +00:00
zzz
a5c4ba3bff 2007-03-03 zzz
* Upgrade from Jetty 5.1.6 to 5.1.12 which fixes spaces in URL
    * Add a updaterWithJetty build target
2007-03-03 20:30:52 +00:00
zzz
1bbd2cf52e 2007-03-03 zzz
* Implement priority sending for NTCP
    * Disable trimForOverload() in tunnel BuildExecutor which
      was preventing tunnel builds when outbound traffic was high
      (i.e. most of the time when running i2psnark)
2007-03-03 20:11:05 +00:00
zzz
ce50efa60c (zzz) 02-28 i2psnark file reopen cleanup 2007-03-01 02:29:11 +00:00
zzz
a3c64a9ba3 (zzz) 02-28 add peer details to i2psnark web page 2007-03-01 02:22:14 +00:00
zzz
1447164a8a (zzz) 2-20 2007-02-24 09:15:29 +00:00
zzz
bc0bf8d7ff (zzz) Fix Syndie URL typo 2007-02-22 08:51:14 +00:00
zzz
9f346f488e (zzz) add links 2007-02-17 02:07:45 +00:00
zzz
760d7d9704 (zzz) 0.6.1.27 2007-02-16 19:35:09 +00:00
77310e17d1 * 2007-02-15 0.6.1.27 released
2007-02-15  jrandom
    * Limit the whispering floodfill sends to at most 3 randomly
      chosen from the known floodfill peers
2007-02-15 23:25:04 +00:00
e54b964929 2007-02-14 jrandom
* Don't filter out KICK and H(ide oper status) IRC messages
      (thanks Takk and postman!)
2007-02-14 21:35:43 +00:00
809f3e847b 2007-02-13 jrandom
* Tell our peers about who we know in the floodfill netDb every
      6 hours or so, mitigating the situation where peers lose track
      of floodfill routers.
    * Disable the Syndie updater (people should use the new Syndie,
      not this one)
    * Disable the eepsite tunnel by default
2007-02-14 04:33:36 +00:00
zzz
f4beebe60d (zzz) 02-13 2007-02-14 03:04:11 +00:00
827e427f0b added trac.i2p 2007-02-12 10:26:21 +00:00
zzz
c02125511d (zzz) 02-06 2007-02-08 18:51:58 +00:00
zzz
1aa1069b6f (zzz) 01-30 2007-02-02 02:50:43 +00:00
zzz
91d281077d 2007-01-30 zzz
* i2psnark: Don't hold _snarks lock while checking a snark,
      so web page is responsive at startup
2007-01-30 08:58:19 +00:00
zzz
f339dec024 2007-01-29 zzz
* i2psnark: Add NickyB tracker
2007-01-30 04:05:21 +00:00
zzz
2aeef44f8d 2007-01-28 zzz
* i2psnark: Don't hold sendQueue lock while flushing output,
      to make everything run smoother
2007-01-29 04:03:36 +00:00
zzz
0fd41a9490 2007-01-27 zzz
* i2psnark: Fix orphaned Snark reader tasks leading to OOMs
2007-01-28 02:30:05 +00:00
58f10d14b2 2007-01-20 Complication
* Drop overlooked comment
2007-01-21 03:49:41 +00:00
46ca42ddf8 2007-01-20 Complication
* Modify ReseedHandler to query the "i2p.reseedURL" property from I2PAppContext
      instead of System, so setting a reseed URL in advanced configuration has effect.
    * Clean out obsolete reseed code from ConfigNetHandler.
2007-01-21 03:35:49 +00:00
zzz
e6e6d6f4ee 2007-01-20 zzz
* Improve performance by not reading in the whole
      piece from disk for each request. A huge memory savings
      on 1MB torrents with many peers.
2007-01-21 01:43:31 +00:00
zzz
8a87df605b 2007-01-20 zzz
* i2psnark: More choking rotation tweaks
    * Improve performance by not reading in the whole
      piece from disk for each request. A huge memory savings
      on 1MB torrents with many peers.
2007-01-21 00:35:09 +00:00
zzz
8ca085bceb (zzz) 1/16 2007-01-19 03:04:49 +00:00
zzz
df47587db0 * Add new HTTP Proxy error message for non-http protocols 2007-01-18 01:42:13 +00:00
zzz
d705e0ad04 2007-01-17 zzz
* Add note on Syndie index.html steering people to new Syndie
2007-01-17 05:13:27 +00:00
zzz
40d209dd7c 2007-01-17 zzz
* i2psnark: Fix crash when autostart off and
      torrent started manually
2007-01-16 06:20:23 +00:00
zzz
7f2a0457bf 2007-01-16 zzz
* i2psnark: Fix bug caused by last i2psnark checkin
      (ConnectionAcceptor not started)
    * Don't start PeerCoordinator, ConnectionAcceptor,
      and TrackerClient unless starting torrent
2007-01-16 04:47:58 +00:00
f4749f2483 2007-01-15 jrandom
* small guard against unnecessary streaming lib reset packets
      (thanks Complication!)
2007-01-15 06:35:59 +00:00
zzz
9c42830076 2007-01-15 zzz
* i2psnark: Add 'Stop All' link on web page
    * Add some links to trackers and forum on web page
    * Don't start tunnel if 'Autostart' unchecked
    * Fix torrent restart bug by reopening file descriptors
2007-01-15 04:36:03 +00:00
zzz
53ba6c2a64 2007-01-14 zzz
* i2psnark: Improvements for torrents with > 4 leechers:
      choke based on upload rate when seeding, and
      be smarter and fairer about rotating choked peers.
    * Handle two common i2psnark OOM situations rather
      than shutting down the whole thing.
    * Fix reporting to tracker of remaining bytes for
      torrents > 4GB (but ByteMonsoon still has a bug)
2007-01-14 19:49:33 +00:00
zzz
61b3f21f69 (zzz) 01-09 2007-01-13 01:15:04 +00:00
zzz
506fd5f889 (zzz) Jan. 2 2007-01-03 03:19:06 +00:00
zzz
d538f888b4 (zzz) 12-19,12-26 2006-12-28 09:11:30 +00:00
976c5fdd47 new syndie httpserv 2006-12-16 22:31:07 +00:00
zzz
b63f3437f2 (zzz) 12-12 mtg 2006-12-13 06:40:15 +00:00
zzz
e760f2e538 (zzz) 12/5 mtg 2006-12-06 02:52:13 +00:00
zzz
17c8fca779 (zzz) 11-28 mtg 2006-11-29 01:37:18 +00:00
zzz
87fda382c3 (zzz) 11/14 and 11/21 mtgs 2006-11-22 02:31:23 +00:00
zzz
1e404cd7ac 2006-10-29 zzz
* i2psnark: Fix and enable generation of multifile torrents,
      print error if no tracker selected at create-torrent,
      fix stopping a torrent that hasn't started successfully,
      add eBook and GayTorrents trackers to form,
      web page formatting tweaks
2006-11-10 01:44:35 +00:00
zzz
098f99d806 (zzz) 11/7 mtg 2006-11-10 01:34:54 +00:00
zzz
da93f96035 (zzz) 10/31 mtg 2006-11-02 02:39:07 +00:00
ead39cc87e 2006-10-29 Complication
* Ensure we get NTP samples from more diverse sources
      (0.pool.ntp.org, 1.pool.ntp.org, etc)
    * Discard median-based peer skew calculator as framed average works,
      and adjusting its percentage can make it behave median-like
    * Require more data points (from at least 20 peers)
      before considering a peer skew measurement reliable
2006-10-29 19:29:50 +00:00
zzz
e4e3c44459 (zzz) 10/24 2006-10-25 08:22:09 +00:00
zzz
af151e32e5 (zzz) 10/17 mtg 2006-10-21 08:22:24 +00:00
12819a2a17 added mtn.i2p 2006-10-15 02:10:36 +00:00
87eedff254 0.6.1.26 2006-10-09 05:10:22 +00:00
4c59cd7621 * 2006-10-10 0.6.1.26 released
2006-10-10  jrandom
    * Removed the status display from the console, as it's more confusing
      than informative (though the content is still displayed in the HTML)
2006-10-09 01:44:47 +00:00
ef707e7956 2006-10-08 Complication
* Update comment to reflect current status
2006-10-08 23:10:58 +00:00
73cf3fb299 2006-10-08 Complication
* Add a framed average peer clock skew calculator
    * Add config property "router.clockOffsetSanityCheck" to determine
      if NTP-suggested clock offsets get sanity checked (default "true")
    * Reject NTP-suggested clock offsets if they'd increase peer clock skew
      by more than 5 seconds, or make it more than 20 seconds total
    * Decrease log level in getMedianPeerClockSkew()
2006-10-08 22:52:59 +00:00
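For illustration, the sanity-check switch above is a single config property (default shown):

    # reject NTP offsets that would worsen peer clock skew as described above
    router.clockOffsetSanityCheck=true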
zzz
80b0c97d72 (zzz) 10/3 status notes 2006-10-04 19:41:09 +00:00
zzz
5cf85c1d7b (zzz)
* i2psnark: Second try at synchronization fix - synch addRequest()
      completely rather than just portions of it and requestNextPiece()
2006-09-29 23:54:17 +00:00
c14e52ceb5 2006-09-27 jrandom
* added HMAC-SHA256
    * properly use CRLF with EepPost
    * suppress jbigi/jcpuid messages if jbigi.dontLog/jcpuid.dontLog is set
    * PBE session key generation (with 1000 rounds of SHA256)
    * misc SDK helper functions
2006-09-27 06:00:33 +00:00
32a579e480 2006-09-26 Complication
* Take back another inadvertent logging change in NTCPConnection
2006-09-27 04:44:13 +00:00
0a240a4436 2006-09-26 Complication
* Take back an accidental log level change
2006-09-27 04:31:34 +00:00
9325b806e4 2006-09-26 Complication
* Subclass from Clock a RouterClock which can access router transports,
      with the goal of developing it to second-guess NTP results
    * Make transports report clock skew in seconds
    * Adjust renderStatusHTML() methods accordingly
    * Show average for NTCP clock skews too
    * Give transports a getClockSkews() method to report clock skews
    * Give transport manager a getClockSkews() method to aggregate results
    * Give comm system facade a getMedianPeerClockSkew() method which RouterClock calls
      (to observe results, add "net.i2p.router.transport.CommSystemFacadeImpl=WARN" to logging)
    * Extra explicitness in NTCP classes to denote unit of time.
    * Fix some places in NTCPConnection where milliseconds and seconds were confused
2006-09-27 04:02:13 +00:00
zzz
ef2e24ea11 (zzz)
* i2psnark: Paranoid copy before writing pieces,
      recheck files on completion, redownload bad pieces
    * i2psnark: Don't contact tracker as often when seeding
2006-09-26 03:11:39 +00:00
zzz
373934c6e0 (zzz)
* i2psnark: Add some synchronization to prevent rare problem
      after restoring orphan piece
2006-09-24 18:30:22 +00:00
zzz
e8e8bac694 (zzz)
* i2psnark: Eliminate duplicate requests caused by i2p-bt's
      rapid choke/unchokes
    * i2psnark: Truncate long TrackerErr messages on web page
2006-09-20 22:39:24 +00:00
zzz
23e8a558c2 (zzz)
* i2psnark: Implement retransmission of requests. This
      eliminates one cause of complete stalls with a peer.
      This problem is common on torrents with a small number of
      active peers where there are no choke/unchokes to kickstart things.
2006-09-16 21:07:28 +00:00
46f2645834 2006-09-14 Complication
* news.xml update
2006-09-14 03:16:53 +00:00
zzz
2329439034 (zzz)
* i2psnark: Fix restoral of partial pieces broken by last patch
2006-09-14 02:37:32 +00:00
zzz
6d400368b9 (zzz) changelog date fix 2006-09-13 23:24:14 +00:00
zzz
26c13b40fe (zzz)
* i2psnark: Mark a peer's requests as unrequested on disconnect,
      preventing premature end game
    * i2psnark: Randomize selection of next piece during end game
    * i2psnark: Don't restore a partial piece to a peer that is already working on it
    * i2psnark: strip ".torrent" on web page
    * i2psnark: Limit piece size in generated torrent to 1MB max
2006-09-13 23:02:07 +00:00
zzz
9fd0e95fe8 (zzz) 9/12 status 2006-09-13 17:34:58 +00:00
zzz
7e21f2c92b (zzz)
* i2psnark: Add "Stalled" indication and stat totals on web page
2006-09-10 01:55:37 +00:00
zzz
c9d8e796c6 (zzz)
* i2psnark: Fix bug where new peers would always be set to "interested"
      regardless of actual interest
    * i2psnark: Reduce max piece size from 10MB to 1MB; larger may have severe
      memory and efficiency problems
2006-09-09 22:15:05 +00:00
zzz
e7203f5d46 (zzz) 0.6.1.25 2006-09-09 21:19:49 +00:00
22d76a1b64 * 2006-09-09 0.6.1.25 released 2006-09-09 17:46:21 +00:00
0903dc46c6 2006-09-08 jrandom
* Tweak the PRNG logging so it only displays error messages if there are
      problems
    * Disable dynamic router keys for the time being, as they don't offer
      meaningful security, may hurt the router, and makes it harder to
      determine the network health.  The code to restart on SSU IP change is
      still enabled however.
    * Disable tunnel load testing, leaning back on the tiered selection for
      the time being.
    * Spattering of bugfixes
2006-09-09 01:41:57 +00:00
zzz
0f56ec8078 (zzz) oops remove duplicate 2006-09-07 23:26:53 +00:00
zzz
70ee1df2bf (zzz)
* i2psnark: Increase output timeout from 2 min to 4 min
    * i2psnark: Orphan debug msg cleanup
    * i2psnark: More web rate report cleanup
2006-09-07 23:22:12 +00:00
zzz
61a6a29bec cCVS: ---------------------------------------------------------------------- 2006-09-07 23:03:18 +00:00
zzz
678f7d8f72 (zzz)
* i2psnark: Implement basic partial-piece saves across connections
    * i2psnark: Implement keep-alive sending. This will keep non-i2psnark clients
      from dropping us for inactivity but also renders the 2-minute transmit-inactivity
      code in i2psnark ineffective. Will have to research why there is transmit but
      not receive inactivity code. With the current connection limit of 24 peers
      we aren't in any danger of keeping out new peers by keeping inactive ones.
    * i2psnark: Increase CHECK_PERIOD from 20 to 40 since nothing happens in 20 seconds
    * i2psnark: Fix dropped chunk handling
    * i2psnark: Web rate report cleanup
2006-09-06 06:32:53 +00:00
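For reference, a BitTorrent keep-alive is simply a message with a zero length prefix and no payload, so keeping a peer from timing out costs four bytes on the wire. A hypothetical sketch of sending one (the stream wiring is illustrative, not i2psnark's actual peer-connection code):

    import java.io.DataOutputStream;
    import java.io.IOException;

    // Illustrative only: a BitTorrent keep-alive is a 4-byte big-endian length
    // prefix of zero, with no message id and no body.
    static void sendKeepAlive(DataOutputStream out) throws IOException {
        out.writeInt(0);    // length prefix 0 == keep-alive
        out.flush();
    }
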
zzz
b92ee364bc (zzz)
* i2psnark: Report cleared trackerErr immediately
    * i2psnark: Add trackerErr reporting after previous success; retry more quickly
    * i2psnark: Set up new connections more quickly
    * i2psnark: Don't delay tracker fetch when setting up lots of connections
    * i2psnark: Reduce MAX_UPLOADERS from 12 to 4
2006-09-04 08:26:21 +00:00
zzz
aef19fcd38 (zzz) i2psnark: enable pipelining, set tunnel length default to 1 + 0-1 2006-09-04 06:01:53 +00:00
zzz
3b01df1d2c (zzz) Add rate reporting to i2psnark 2006-09-03 09:12:22 +00:00
4aed23b198 2006-09-03 Complication
* Limit form size in SusiDNS to avoid exceeding a POST size limit on postback
    * Print messages about addressbook size to give better overview
    * Enable delete function in published addressbook
2006-09-03 06:37:46 +00:00
03e8875c27 2006-08-21 Complication
* Fix error reporting discrepancy (thanks for helping notice, yojoe!)
2006-08-21 05:55:33 +00:00
48921a0875 2006-08-03 Complication
* news.xml update
2006-08-04 00:49:40 +00:00
633fabb09e 2006-08-03 jrandom
* Decrease the recently modified tunnel building timeout, though keep
      the scaling on their processing
2006-08-03 22:34:24 +00:00
bc42c26d94 2006-07-31 jrandom
* Increase the tunnel building timeout
    * Avoid a rare race (thanks bar!)
    * Fix the bandwidth capacity publishing code to factor in share percentage
      and outbound throttling (oops)
2006-08-01 02:26:52 +00:00
3c09ca3359 2006-07-29 Complication
* Treat NTP responses from unexpected stratums like failures
2006-07-30 05:08:20 +00:00
zzz
1e9e7dd345 (zzz) 0.6.1.24 2006-07-29 23:02:57 +00:00
034803add7 * 2006-07-28 0.6.1.24 released 2006-07-29 18:03:14 +00:00
b25bb053bb 2006-07-28 jrandom
* Don't try to reverify too many netDb entries at once (thanks
      cervantes and Complication!)
2006-07-29 04:41:15 +00:00
9bd0c79441 2006-07-28 jrandom
* Actually fix the threading deadlock issue in the netDb (removing
      the synchronized access to individual kbuckets while validating
      individual entries) (thanks cervantes, postman, frosk, et al!)
2006-07-29 01:11:50 +00:00
06b8670410 * 2006-07-27 0.6.1.23 released 2006-07-28 03:34:59 +00:00
6577ae499f 2006-07-27 jrandom
* Cut down NTCP connection establishments once we know the peer is skewed
      (rather than wait for full establishment before verifying)
    * Removed a lock on the stats framework when accessing rates, which
      shouldn't be a problem, assuming rates are created (pretty much) all at
      once and merely updated during the lifetime of the jvm.
2006-07-27 23:40:00 +00:00
54bc5485ec oops, thanks bar! 2006-07-27 07:06:55 +00:00
84b741ac98 2006-07-27 jrandom
* Further NTCP write status cleanup
    * Handle more oddly-timed NTCP disconnections (thanks bar!)
2006-07-27 06:20:25 +00:00
c48c419d74 quick prng workaround 2006-07-27 01:34:31 +00:00
fb2e795add 2006-07-26 jrandom
* When dropping a netDb router reference, only accept newer
      references as part of the update check
    * If we have been up for a while, don't accept really old
      router references (published 2 or more days ago)
    * Drop router references once they are no longer valid, even if
      they were allowed in due to the lax restrictions on startup
2006-07-27 01:04:59 +00:00
ec215777ec 2006-07-26 jrandom
* When dropping a netDb router reference, only accept newer
      references as part of the update check
    * If we have been up for a while, don't accept really old
      router references (published 2 or more days ago)
    * Drop router references once they are no longer valid, even if
      they were allowed in due to the lax restrictions on startup
2006-07-27 00:56:49 +00:00
d4e0f27c56 2006-07-26 jrandom
* Every time we create a new router identity, add an entry to the
      new "identlog.txt" text file in the I2P install directory.  For
      debugging purposes, publish the count of how many identities the
      router has cycled through, though not the identities themselves.
    * Cleaned up the way the multitransport shitlisting worked, and
      added per-transport shitlists
    * When dropping a router reference locally, first fire a netDb
      lookup for the entry
    * Take the peer selection filters into account when organizing the
      profiles (thanks Complication!)
    * Avoid some obvious configuration errors for the NTCP transport
      (invalid ports, "null" ip, etc)
    * Deal with some small NTCP bugs found in the wild (unresolveable
      hosts, strange network discons, etc)
    * Send our netDb info to peers we have direct NTCP connections to
      after each 6-12 hours of connection uptime
    * Clean up the NTCP reading and writing queue logic to avoid some
      potential delays
    * Allow people to specify the IP that the SSU transport binds on
      locally, via the advanced config "i2np.udp.bindInterface=1.2.3.4"
2006-07-26 06:36:18 +00:00
e1c686baa6 2006-07-18 Complication
* URL and date fix in news.xml
2006-07-18 23:35:54 +00:00
d57af1aef4 remove 1.5ism 2006-07-18 20:20:08 +00:00
a52dd57215 * 2006-07-18 0.6.1.22 released
2006-07-18  jrandom
    * Add a failsafe to the NTCP transport to make sure we keep
      pumping writes when we should.
    * Properly reallow 16-32KBps routers in the default config
      (thanks Complication!)
2006-07-18 20:08:00 +00:00
65138357d3 2006-07-16 Complication
* Collect tunnel build agree/reject/expire statistics
      for each bandwidth tier of peers (and peers of unknown tiers,
      even if those shouldn't exist)
2006-07-16 17:20:46 +00:00
f6320696dd 2006-07-14 jrandom
* Improve the multitransport shitlisting (thanks Complication!)
    * Allow routers with a capacity of 16-32KBps to be used in tunnels under
      the default configuration (thanks for the stats Complication!)
    * Properly allow older router references to load on startup
      (thanks bar, Complication, et al!)
    * Add a new "i2p.alwaysAllowReseed" advanced config property, though
      hopefully today's changes should make this unnecessary (thanks void!)
    * Improved NTCP buffering
    * Close NTCP connections if we are too backlogged when writing to them
2006-07-14 18:08:44 +00:00
900d8a2026 oops, test method. thanks cervantes 2006-07-07 19:04:24 +00:00
ccc9a87e8c unnecessary 2006-07-07 18:59:16 +00:00
208634e5de 2006-07-04 jrandom
* New NIO-based tcp transport (NTCP), enabled by default for outbound
      connections only.  Those who configure their NAT/firewall to allow
      inbound connections and specify the external host and port
      (dyndns/etc is ok) on /config.jsp can receive inbound connections.
      SSU is still enabled for use by default for all users as a fallback.
    * Substantial bugfix to the tunnel gateway processing to transfer
      messages sequentially instead of interleaved
    * Renamed GNU/crypto classes to avoid name clashes with kaffe and other
      GNU/Classpath based JVMs
    * Adjust the Fortuna PRNG's pooling system to reduce contention on
      refill with a background thread to refill the output buffer
    * Add per-transport support for the shitlist
    * Add a new async pumped tunnel gateway to reduce tunnel dispatcher
      contention
2006-07-04 21:17:44 +00:00
3d07205c9d 2006-07-01 Complication
* Ensure that the I2PTunnel web interface won't update tunnel settings
      for shared clients when a non-shared client is modified
      (thanks for spotting, BarkerJr!)
2006-07-01 22:44:34 +00:00
zzz
f0a424a93f (zzz) .21, 6-13 mtg 2006-06-15 22:15:09 +00:00
f9b59ee07d 2006-06-14 cervantes
* Small tweak to I2PTunnel CSS, so it looks better with desktops
      that use Bitstream Vera fonts @ 96 dpi
2006-06-14 05:24:34 +00:00
b92b9d2618 * 2006-06-14 0.6.1.21 released 2006-06-14 02:17:40 +00:00
a3db9429a7 2006-06-13 jrandom
* Use a minimum uptime of 2 hours, not 4 (oops)
2006-06-13 23:29:51 +00:00
291a5c9578 2006-06-13 jrandom
* Cut down the proactive rejections due to queue size - if we are
      at the point of having decrypted the request off the queue, might
      as well let it through, rather than waste that decryption
2006-06-13 09:38:48 +00:00
0a3281c279 2006-06-11 Kloug
* Bugfix to the I2PTunnel IRC filter to support multiple concurrent
      outstanding pings/pongs
2006-06-11 19:48:37 +00:00
23f30ba576 2006-06-10 jrandom
* Further reduction in proactive rejections
2006-06-10 20:14:57 +00:00
f3de85c4de 2006-06-09 jrandom
* Don't let the pending tunnel request queue grow beyond reason
      (letting things sit for up to 30s when they fail after 10s
      seems a bit... off)
2006-06-10 00:34:44 +00:00
a3a4888e0b 2006-06-08 jrandom
* Be more conservative in the proactive rejections
2006-06-09 01:02:40 +00:00
6fd7881f8e thanks bar 2006-06-05 06:41:11 +00:00
381f716769 2006-06-04 Complication
* Stop sending a blank line before USER in susimail.
      Seemed to break in rare cases, thanks for reporting, Brachtus!
2006-06-05 01:33:03 +00:00
f2078e1523 * 2006-06-04 0.6.1.20 released
2006-06-04  jrandom
    * Reduce the SSU ack frequency
    * Tweaked the tunnel rejection settings to reject less aggressively
2006-06-04 22:25:08 +00:00
f2fb87c88b 2006-05-31 jrandom
* Only send netDb searches to the floodfill peers for the time being
    * Add some proof of concept filters for tunnel participation.  By default,
      it will skip peers with an advertised bandwidth of less than 32KBps or
      an advertised uptime of less than 2 hours.  If this is sufficient, a
      safer implementation of these filters will be implemented.
2006-05-31 23:23:37 +00:00
fcbea19478 2006-05-30 Complication
* weekly news.xml update
2006-05-31 03:19:24 +00:00
92f25bd4fa 2006-05-18 Complication
* news.xml update
2006-05-19 00:17:10 +00:00
85c2c11217 * 2006-05-18 0.6.1.19 released
2006-05-18  jrandom
    * Made the SSU ACKs less frequent when possible
2006-05-18 22:31:06 +00:00
de1ca4aea4 2006-05-17 Complication
* Fix some oversights in my previous changes:
      adjust some loglevels, make a few statements less wasteful,
      make one comparison less confusing and more likely to log unexpected values
2006-05-18 03:42:55 +00:00
a0f865fb99 2006-05-17 jrandom
* Make the peer page sortable
    * SSU modifications to cut down on unnecessary connection failures
2006-05-18 03:00:48 +00:00
2c3fea5605 2006-05-16 jrandom
* Further shitlist randomizations
    * Adjust the stats monitored for detecting cpu overload when dropping new
      tunnel requests
2006-05-16 18:34:08 +00:00
ba1d88b5c9 2006-05-15 jrandom
* Add a load dependent throttle on the pending inbound tunnel request
      backlog
    * Increased the tunnel test failure slack before killing a tunnel
2006-05-15 17:33:02 +00:00
2ad715c668 2006-05-13 Complication
* Update the build number too
2006-05-14 05:11:36 +00:00
5f17557e54 2006-05-13 Complication
* Separate growth factors for tunnel count and tunnel test time
    * Reduce growth factors, so the probabilistic throttle would activate
    * Square probAccept values to decelerate more strongly when far from average
    * Create a bandwidth stat with approximately 15-second half life
    * Make allowTunnel() check the 1-second bandwidth for overload
      before doing allowance calculations using 15-second bandwidth
    * Tweak the overload detector in BuildExecutor to be more sensitive
      for rising edges, add ability to initiate tunnel drops
    * Add a function to seek and drop the highest-rate participating tunnel,
      keeping a fixed+random grace period between such drops.
      It doesn't seem very effective, so disabled by default
      ("router.dropTunnelsOnOverload=true" to enable)
2006-05-14 04:52:44 +00:00
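A rough sketch of the accept decision described in the entry above: check the 1-second bandwidth for outright overload first, then compute an acceptance probability from the 15-second bandwidth and square it so the throttle backs off harder the further the load is from average. All names and the exact shape of probAccept are illustrative, not the BuildExecutor code.

    import java.util.Random;

    // Illustrative only.
    static boolean allowTunnel(double oneSecBps, double fifteenSecBps,
                               double bandwidthLimitBps, Random rnd) {
        if (oneSecBps > bandwidthLimitBps)
            return false;                        // 1-second rate already shows overload
        double probAccept = 1.0 - Math.min(1.0, fifteenSecBps / bandwidthLimitBps);
        probAccept *= probAccept;                // square: back off more strongly when far from average
        return rnd.nextDouble() < probAccept;
    }
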
2ad5a6f907 2006-05-11 jrandom
* PRNG bugfix (thanks cervantes and Complication!)
2006-05-12 03:31:44 +00:00
0920462060 2006-05-09 Complication
* weekly news.xml update
2006-05-10 02:38:42 +00:00
870e94e184 * 2006-05-09 0.6.1.18 released
2006-05-09  jrandom
    * Further tunnel creation timeout revamp
2006-05-09 21:17:17 +00:00
6b0d507644 2006-05-07 Complication
* Fix problem whereby repeated calls to allowed() would make
      the 1-tunnel exception permit more than one concurrent build
2006-05-08 03:19:46 +00:00
70cf9e4ca7 2006-05-06 jrandom
* Readjust the tunnel creation timeouts to reject less but fail earlier,
      while tracking the extended timeout events.
2006-05-06 20:27:34 +00:00
2a3974c71d 2006-05-04 jrandom
* Short circuit a highly congested part of the stat logging unless it's
      required (may or may not help with a synchronization issue reported by
      andreas)
2006-05-04 23:08:48 +00:00
46ac9292e8 2006-05-03 Complication
* Allow a single build attempt to proceed despite 1-minute overload
      only if the 1-second rate shows enough spare bandwidth
      (e.g. overload has already eased)
2006-05-03 11:13:26 +00:00
4307097472 2006-05-02 Complication
* Correct a misnamed property in SummaryHelper.java
      to avoid confusion
    * Make the maximum allowance of our own concurrent
      tunnel builds slightly adaptive: one concurrent build per 6 KB/s
      within the fixed range 2..10
    * While overloaded, try to avoid completely choking our own build attempts,
      instead prefer limiting them to 1
2006-05-03 04:30:26 +00:00
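The build allowance described in the entry above reduces to a small clamped formula; a hypothetical sketch (the method name and bandwidth units are illustrative, the 6 KB/s ratio and the 2..10 range come from the entry):

    // Illustrative only: one concurrent tunnel build per 6 KB/s of bandwidth,
    // clamped to the fixed range 2..10.
    static int maxConcurrentBuilds(int shareKBps) {
        int allowed = shareKBps / 6;
        return Math.max(2, Math.min(10, allowed));
    }

Per the entry, while overloaded the router further limits its own attempts to 1, which would simply override this value.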
ed3fdaf4f1 2006-05-02 Complication
* Fixed URL in previous update, sorry
2006-05-03 02:11:06 +00:00
378a9a8f5c 2006-05-02 Complication
* Weekly news.xml update
2006-05-03 02:03:01 +00:00
4ef6180455 2006-05-01 jrandom
* Adjust the tunnel build timeouts to cut down on expirations, and
      increased the SSU connection establishment retransmission rate to
      something less glacial.
    * For the first 5 minutes of uptime, be less aggressive with tunnel
      exploration, opting for more reliable peers to start with.
2006-05-01 22:40:21 +00:00
d4970e23c0 2006-05-01 jrandom
* Fix for a netDb lookup race (thanks cervantes!)
2006-05-01 19:09:02 +00:00
0c9f165016 fix typos 2006-05-01 15:39:37 +00:00
be3a899ecb 2006-04-27 jrandom
* Avoid a race in the message reply registry (thanks cervantes!)
2006-04-28 00:31:20 +00:00
7a6a749004 2006-04-27 jrandom
* Fixed the tunnel expiration desync code (thanks Complication!)
2006-04-28 00:08:40 +00:00
17271ee3f0 2006-04-25 Complication
* weekly news.xml update
2006-04-26 02:30:05 +00:00
99bcfa90df 2006-04-24 Complication
* Update news.xml to reflect 0.6.1.17
2006-04-24 12:43:25 +00:00
eb36e993c1 * 2006-04-23 0.6.1.17 released 2006-04-23 21:06:12 +00:00
zzz
e5eca5fa45 zzz update 2006-04-22 20:37:21 +00:00
8cba2f4236 2006-04-19 jrandom
* Adjust how we pick high capacity peers to allow the inclusion of fast
      peers (the previous filter assumed an old usage pattern)
    * New set of stats to help track per-packet-type bandwidth usage better
    * Cut out the proactive tail drop from the SSU transport, for now
    * Reduce the frequency of tunnel build attempts while we're saturated
    * Don't drop tunnel requests as easily - prefer to explicitly reject them
2006-04-19 17:46:51 +00:00
40d5ed31ac 2006-04-15 Complication
* Update news.xml to reflect 0.6.1.16
2006-04-15 17:25:50 +00:00
181275fe35 * 2006-04-15 0.6.1.16 released 2006-04-15 07:58:12 +00:00
23d8c01ce7 2006-04-15 jrandom
* Adjust the proactive tunnel request dropping so we will reject what we
      can instead of dropping so much (but still dropping if we get too far
      overloaded)
2006-04-15 07:15:19 +00:00
de83944486 2006-04-14 jrandom
* 0 isn't very random
    * Adjust the tunnel drop to be more reasonable
2006-04-14 20:24:07 +00:00
90cd7ff23a 2006-04-14 jrandom
* -28.00230115311259 is not between 0 and 1 in any universe I know.
    * Made the bw-related tunnel join throttle much simpler
2006-04-14 18:04:11 +00:00
8d0a9b4ccd 2006-04-14 jrandom
* Make some more stats graphable, and allow some internal tweaking on the
      tunnel pairing for creation and testing.
2006-04-14 11:40:35 +00:00
230d4cd23f * 2006-04-13 0.6.1.15 released 2006-04-13 12:40:21 +00:00
e9b6fcc0a4 2006-04-12 jrandom
* Added a further failsafe against trying to queue up too many messages to
      a peer.
2006-04-13 04:22:06 +00:00
8fcb871409 2006-04-12 jrandom
* Watch out for failed syndie index fetches (thanks bar!)
2006-04-12 06:49:01 +00:00
83bef43fd5 2006-04-11 jrandom
* Throttling improvements on SSU - throttle all transmissions to a peer
      when we are retransmitting, not just retransmissions.  Also, if
      we're already retransmitting to a peer, probabilistically tail drop new
      messages targeting that peer, based on the estimated wait time before
      transmission.
    * Fixed the rounding error in the inbound tunnel drop probability.
2006-04-11 13:39:06 +00:00
b4fc6ca31b 2006-04-10 jrandom
* Include a combined send/receive graph (good idea cervantes!)
    * Proactively drop inbound tunnel requests probabilistically as the
      estimated queue time approaches our limit, rather than letting them all
      through up to that limit.
2006-04-10 05:37:28 +00:00
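A hypothetical sketch of the probabilistic drop described in the entry above: rather than a hard cutoff at the queue-time limit, the drop probability ramps up as the estimated queue time approaches that limit (the ramp shape here is illustrative):

    import java.util.Random;

    // Illustrative only.
    static boolean dropInboundRequest(long estQueueTimeMs, long queueTimeLimitMs, Random rnd) {
        if (estQueueTimeMs >= queueTimeLimitMs)
            return true;                                     // past the limit: always drop
        double pDrop = (double) estQueueTimeMs / queueTimeLimitMs;
        return rnd.nextDouble() < pDrop * pDrop;             // gentle at first, steep near the limit
    }
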
ab3f1b708d 2006-04-08 jrandom
* Stat summarization fix (removing the occasional holes in the jrobin
      graphs)
2006-04-09 01:14:08 +00:00
c76402a160 2006-04-08 jrandom
* Process inbound tunnel requests more efficiently
    * Proactively drop inbound tunnel requests if the queue before we'd
      process it in is too long (dynamically adjusted by cpu load)
    * Adjust the tunnel rejection throttle to reject requests when we have to
      proactively drop too many requests.
    * Display the number of pending inbound tunnel join requests on the router
      console (as the "handle backlog")
    * Include a few more stats in the default set of graphs
2006-04-08 06:15:43 +00:00
a50c73aa5e 2006-04-06 jrandom
* Fix for a bug in the new irc ping/pong filter (thanks Complication!)
2006-04-07 01:26:32 +00:00
5aa66795d2 2006-04-06 jrandom
* Fixed a typo in the reply cleanup code
2006-04-06 10:33:44 +00:00
ac3c2d2b15 * 2006-04-05 0.6.1.14 released 2006-04-05 17:08:04 +00:00
072a45e5ce 2006-04-05 jrandom
* Cut down on the time that we allow a tunnel creation request to sit by
      without response, and reject tunnel creation requests that are lagged
      locally.  Also switch to a bounded FIFO instead of a LIFO
    * Threading tweaks for the message handling (thanks bar!)
    * Don't add addresses to syndie with blank names (thanks Complication!)
    * Further ban clearance
2006-04-05 04:40:00 +00:00
1ab14e52d2 2006-04-04 Complication
* weekly news.xml update
2006-04-05 03:06:00 +00:00
9a820961a2 2006-04-05 jrandom
* Fix during the ssu handshake to avoid an unnecessary failure on
      packet retransmission (thanks ripple!)
    * Fix during the SSU handshake to use the negotiated session key asap,
      rather than using the intro key for more than we should (thanks ripple!)
    * Fixes to the message reply registry (thanks Complication!)
    * More comprehensive syndie banning (for repeated pushes)
    * Publish the router's ballpark bandwidth limit (w/in a power of 2), for
      testing purposes
    * Put a floor back on the capacity threshold, so too many failing peers
      won't cause us to pick very bad peers (unless we have very few good
      ones)
    * Bugfix to cut down on peers using introducers unnecessarily (thanks
      Complication!)
    * Reduced the default streaming lib message size to fit into a single
      tunnel message, rather than require 5 tunnel messages to be transferred
      without loss before recomposition.  This reduces throughput, but should
      increase reliability, at least for the time being.
    * Misc small bugfixes in the router (thanks all!)
    * More tweaking for Syndie's CSS (thanks Doubtful Salmon!)
2006-04-04 12:20:32 +00:00
764149aef3 2006-04-01 jrandom
* Take out the router watchdog's teeth (don't restart on leaseset failure)
    * Filter the IRC ping/pong messages, as some clients send unsafe
      information in them (thanks aardvax and dust!)
2006-04-03 10:07:22 +00:00
1b3ad31bff 2006-04-01 jrandom
* Take out the router watchdog's teeth (don't restart on leaseset failure)
2006-04-01 19:05:35 +00:00
15e6c27c04 2006-03-30 jrandom
* Substantially reduced the lock contention in the message registry (a
      major hotspot that can choke most threads).  Also reworked the locking
      so we don't need per-message timer events
    * No need to have additional per-peer message clearing, as they are
      either unregistered individually or expired.
    * Include some of the more transient tunnel throttling
2006-03-30 07:26:43 +00:00
8b707e569f 2006-03-28 Complication
* weekly news.xml update
2006-03-29 02:09:23 +00:00
e4c4b24c61 2006-03-26 Complication
* announce 0.6.1.13
2006-03-27 03:24:38 +00:00
031636e607 * 2006-03-26 0.6.1.13 released 2006-03-26 23:23:49 +00:00
b5c0d77c69 2006-03-25 jrandom
* Added a simple purge and ban of syndie authors, shown as the
      "Purge and ban" button on the addressbook for authors that are already
      on the ignore list.  All of their entries and metadata are deleted from
      the archive, and they are transparently filtered from any remote
      syndication (so no user on the syndie instance will pull any new posts
      from them)
    * More strict tunnel join throttling when congested
2006-03-25 23:50:48 +00:00
d489caa88c 2006-03-24 jrandom
* Try to desync tunnel building near startup (thanks Complication!)
    * If we are highly congested, fall back on only querying the floodfill
      netDb peers, and only storing to those peers too
    * Cleaned up the floodfill-only queries
2006-03-24 20:53:28 +00:00
2a24029acf 2006-03-21 Complication
* Weekly news.xml update
2006-03-22 02:15:13 +00:00
c5aab8c750 2006-03-21 jrandom
* Avoid a very strange (unconfirmed) bug that people using the systray's
      browser picker dialog could cause by disabling the GUI-based browser
      picker.
    * Cut down on subsequent streaming lib reset packets transmitted
    * Use a larger MTU more often
    * Allow netDb searches to query shitlisted peers, as the queries are
      indirect.
    * Add an option to disable non-floodfill netDb searches (non-floodfill
      searches are used by default, but can be disabled by adding
      netDb.floodfillOnly=true to the advanced config)
2006-03-21 23:11:32 +00:00
343748111a 2006-03-20 jrandom
* Fix to allow for some slack when coalescing stats
    * Workaround some oddball errors
2006-03-20 05:39:54 +00:00
c5ddfabfe9 2006-03-20 jrandom
* Fix to allow for some slack when coalescing stats
    * Workaround some oddball errors
2006-03-20 05:31:09 +00:00
1ef33906ed 2006-03-18 jrandom
* Added a new graphs.jsp page to show all of the stats being harvested
2006-03-19 00:23:23 +00:00
f3849a22ad 2006-03-18 jrandom
* Made the netDb search load limitations a little less stringent
    * Add support for specifying the number of periods to be plotted on the
      graphs - e.g. to plot only the last hour of a stat that is averaged at
      the 60 second period, add &periodCount=60
2006-03-18 23:09:35 +00:00
b03ff21d3b 2006-03-17 jrandom
* Add support for graphing the event count as well as the average stat
      value (done by adding &showEvents=true to the URL).  Also supports
      hiding the legend (&hideLegend=true), the grid (&hideGrid=true), and
      the title (&hideTitle=true).
    * Removed an unnecessary arbitrary filter on the profile organizer so we
      can pick high capacity and fast peers more appropriately
2006-03-17 23:46:00 +00:00
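Combining the URL parameters above with the periodCount parameter from the preceding graphing entry, an illustrative request might look like the following (the page, port, and parameter combination are only an example of the syntax, not a documented endpoint):

    http://localhost:7657/graphs.jsp?periodCount=60&showEvents=true&hideLegend=true&hideGrid=true&hideTitle=true
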
52094b10c9 aych tee emm ell smells 2006-03-16 22:37:57 +00:00
fc927efaa3 2006-03-16 jrandom
* Integrate basic hooks for jrobin (http://jrobin.org) into the router
      console.  Selected stats can be harvested automatically and fed into
      in-memory RRD databases, and those databases can be served up either as
      PNG images or as RRDtool compatible XML dumps (see oldstats.jsp for
      details).  A base set of stats are harvested by default, but an
      alternate list can be specified by setting the 'stat.summaries' list on
      the advanced config.  For instance:
      stat.summaries=bw.recvRate.60000,bw.sendRate.60000
    * HTML tweaking for the general config page (thanks void!)
    * Odd NPE fix (thanks Complication!)
2006-03-16 21:52:09 +00:00
65dc803fb7 2006-03-16 jrandom
* Integrate basic hooks for jrobin (http://jrobin.org) into the router
      console.  Selected stats can be harvested automatically and fed into
      in-memory RRD databases, and those databases can be served up either as
      PNG images or as RRDtool compatible XML dumps (see oldstats.jsp for
      details).  A base set of stats are harvested by default, but an
      alternate list can be specified by setting the 'stat.summaries' list on
      the advanced config.  For instance:
      stat.summaries=bw.recvRate.60000,bw.sendRate.60000
    * HTML tweaking for the general config page (thanks void!)
    * Odd NPE fix (thanks Complication!)
2006-03-16 21:45:17 +00:00
349adf6690 2006-03-15 Complication
* Trim out an old, inactive IP second-guessing method
      (thanks for spotting, Anonymous!)
2006-03-16 00:49:22 +00:00
2c843fd818 2006-03-15 jrandom
* Further stat cleanup
    * Keep track of how many peers we are actively trying to communicate with,
      beyond those who are just trying to communicate with us.
    * Further router tunnel participation throttle revisions to avoid spurious
      rejections
    * Rate stat display cleanup (thanks ripple!)
    * Don't even try to send messages that have been queued too long
2006-03-15 22:36:10 +00:00
863b511cde 2006-03-15 jrandom
* Further stat cleanup
    * Keep track of how many peers we are actively trying to communicate with,
      beyond those who are just trying to communicate with us.
    * Further router tunnel participation throttle revisions to avoid spurious
      rejections
    * Rate stat display cleanup (thanks ripple!)
    * Don't even try to send messages that have been queued too long
2006-03-15 22:26:42 +00:00
zzz
c417e7c237 2006-03-14 zzz update 2006-03-15 06:02:07 +00:00
zzz
1822c0d7d8 2006-03-07 zzz update 2006-03-09 02:19:42 +00:00
zzz
94c1c32b51 2006-03-05 zzz
* Remove the +++--- from the logs on i2psnark startup
2006-03-06 01:57:47 +00:00
deb35f4af4 2006-03-05 jrandom
* HTML fixes in Syndie to work better with opera (thanks shaklen!)
    * Give netDb lookups to floodfill peers more time, as they are much more
      likely to succeed (thereby cutting down on the unnecessary netDb
      searches outside the floodfill set)
    * Fix to the SSU IP detection code so we won't use introducers when we
      don't need them (thanks Complication!)
    * Add a brief shitlist to i2psnark so it doesn't keep on trying to reach
      peers given to it
    * Don't let netDb searches wander across too many peers
    * Don't use the 1s bandwidth usage in the tunnel participation throttle,
      as it's too volatile to have much meaning.
    * Don't bork if a Syndie post is missing an entry.sml
2006-03-05 17:07:07 +00:00
883150f943 2006-03-05 Complication
* Reduce exposed statistical information,
      to make build and uptime tracking more expensive
2006-03-05 07:44:59 +00:00
717d1b97b2 2006-03-04 Complication
* Fix the announce URL of orion's tracker in Snark sources
2006-03-04 23:50:01 +00:00
e62135eacc 2006-03-03 Complication
* Explicit check for an index out of bounds exception while parsing
      an inbound IRC command (implicit check was there already)
2006-03-04 03:04:06 +00:00
2c6d953359 2006-03-01 jrandom
* More aggressive tunnel throttling as we approach our bandwidth limit,
      and throttle based off periods wider than 1 second.
    * Included Doubtful Salmon's syndie stylings (thanks!)
2006-03-01 23:01:20 +00:00
zzz
2b79e2df3f 2006-02-28 zzz update 2006-03-01 04:11:16 +00:00
zzz
fab6e421b8 2006-02-27 zzz
* Update error page templates to add \r, Connection: close, and
      Proxy-connection: close.
2006-02-28 03:55:18 +00:00
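In other words, the error responses now terminate each header line with CRLF ("\r\n") and include headers roughly like the following (shown alone, without the rest of the response, purely as an illustration):

    Connection: close\r\n
    Proxy-connection: close\r\n
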
589cbd675a * 2006-02-27 0.6.1.12 released
2006-02-27  jrandom
    * Adjust the jbigi.jar to use the athlon-optimized jbigi on windows/amd64
      machines, rather than the generic jbigi (until we have an athlon64
      optimized version)
2006-02-27 19:05:40 +00:00
c486f5980a * 2006-02-27 0.6.1.12 released
2006-02-27  jrandom
    * Adjust the jbigi.jar to use the athlon-optimized jbigi on windows/amd64
      machines, rather than the generic jbigi (until we have an athlon64
      optimized version)
2006-02-27 18:51:31 +00:00
eee21aa301 2006-02-26 jrandom
* Switch from the bouncycastle to the gnu-crypto implementation for
      SHA256, as benchmarks show a 10-30% speedup.
    * Removed some unnecessary object caches
    * Don't close i2psnark streams prematurely
2006-02-26 21:30:56 +00:00
zzz
a2854cf6f6 2006-02-25 zzz spelling fix 2006-02-25 21:51:46 +00:00
62b7cf64da 2006-02-25 jrandom
* Made the Syndie permalinks in the thread view point to the blog view
    * Disabled TCP again (since the live net seems to be doing well w/out it)
    * Fix the message time on inbound SSU establishment (thanks zzz!)
    * Don't be so aggressive with parallel tunnel creation when a tunnel pool
      just starts up
2006-02-25 20:41:51 +00:00
7b2a435aad 2006-02-24 jrandom
* Rounding calculation cleanup in the stats, and avoid an uncontested
      mutex (thanks ripple!)
    * SSU handshake cleanup to help force incompatible peers to stop nagging
      us by both not giving them an updated reference to us and by dropping
      future handshake packets from them.
2006-02-24 09:35:52 +00:00
3d8d21e543 2006-02-23 jrandom
* Increase the SSU retransmit ceiling (for slow links)
    * Estimate the sender's SSU MTU (to help see if we agree)
2006-02-23 14:38:39 +00:00
8b7958cff2 2006-02-22 jrandom
* Fix to properly profile tunnel joins (thanks Ragnarok, frosk, et al!)
    * More aggressive poor-man's PMTU, allowing larger MTUs on less reliable
      links
    * Further class validator refactorings
2006-02-23 08:08:37 +00:00
7bb792836d 2006-02-22 jrandom
* Fix to properly profile tunnel joins (thanks Ragnarok, frosk, et al!)
    * More aggressive poor-man's PMTU, allowing larger MTUs on less reliable
      links
    * Further class validator refactorings
2006-02-23 01:48:47 +00:00
03f509ca54 2006-02-22 jrandom
* Handle a rare race under high bandwidth situations in the SSU transport
    * Minor refactoring so we don't confuse sun's 1.6.0-b2 validator
2006-02-22 14:54:22 +00:00
5f05631936 2006-02-21 Complication
* Reactivate TCP transport by default, in addition to re-allowing
2006-02-22 06:19:19 +00:00
zzz
5cfedd4c8b 2006-02-21 zzz update 2006-02-22 03:34:02 +00:00
zzz
269fec64a5 2006-02-21 zzz
announce 0.6.1.11
2006-02-21 20:12:14 +00:00
f63c6f4717 * 2006-02-21 0.6.1.11 released 2006-02-21 15:20:17 +00:00
b4c495531a 2006-02-21 jrandom
* Throttle the outbound SSU establishment queue, so it doesn't fill up the
      heap when backlogged (and so that the messages queued up on it don't sit
      there forever)
    * Further SSU memory cleanup
    * Clean up the address regeneration code so it knows when to rebuild the
      local info more precisely.
2006-02-21 13:31:18 +00:00
9990126e3e 2006-02-20 jrandom
* Throttle the outbound SSU establishment queue, so it doesn't fill up the
      heap when backlogged (and so that the messages queued up on it don't sit
      there forever)
    * Further SSU memory cleanup
2006-02-21 02:02:48 +00:00
ac8436a8eb 2006-02-20 jrandom
* Properly enable TCP this time (oops)
    * Deal with multiple form handlers on the same page in the console without
      being too annoying (thanks blubb and bd_!)
2006-02-20 18:12:47 +00:00
dee79dfb1c 2006-02-20 jrandom
* Reenable the TCP transport as a fallback (we'll continue to muck with
      debugging SSU-only elsewhere)
2006-02-20 16:42:29 +00:00
9b4e6f475d no need to include this stuff in the updates (they haven't changed) 2006-02-20 14:28:09 +00:00
7672ba23d1 2006-02-20 jrandom
* Major SSU and router tuning to reduce contention, memory usage, and GC
      churn.  There are still issues to be worked out, but this should be a
      substantial improvement.
    * Modified the optional netDb harvester task to support choosing whether
      to use (non-anonymous) direct connections or (anonymous) exploratory
      tunnels to do the harvesting.  Harvesting itself is enabled via the
      advanced config "netDb.shouldHarvest=true" (default is false) and the
      connection type can be chosen via "netDb.harvestDirectly=false" (default
      is false).
2006-02-20 14:27:38 +00:00
4b77ddedcc 2006-02-20 jrandom
* Major SSU and router tuning to reduce contention, memory usage, and GC
      churn.  There are still issues to be worked out, but this should be a
      substantial improvement.
    * Modified the optional netDb harvester task to support choosing whether
      to use (non-anonymous) direct connections or (anonymous) exploratory
      tunnels to do the harvesting.  Harvesting itself is enabled via the
      advanced config "netDb.shouldHarvest=true" (default is false) and the
      connection type can be chosen via "netDb.harvestDirectly=false" (default
      is false).
2006-02-20 14:19:52 +00:00
222af6c090 * Added pruning of suckers history (it used to grow indefinitely). 2006-02-20 05:08:59 +00:00
8e879cb646 2006-02-19 jrandom
* Moved the current net's reseed URL to a different location than where
      the old net looks (dev.i2p.net/i2pdb2/ vs .../i2pdb/)
    * More aggressively expire inbound messages (on receive, not just on send)
    * Add in a hook for breaking backwards compatibility in the SSU wire
      protocol directly by including a version as part of the handshake.  The
      version is currently set to 0, however, so the wire protocol from this
      build is compatible with all earlier SSU implementations.
    * Increased the number of complete message readers, cutting down
      substantially on the delay processing inbound messages.
    * Delete the message history file on startup
    * Reworked the restart/shutdown display on the console (thanks bd_!)
2006-02-19 12:40:03 +00:00
65975df1be 2006-02-19 jrandom
* Moved the current net's reseed URL to a different location than where
      the old net looks (dev.i2p.net/i2pdb2/ vs .../i2pdb/)
    * More aggressively expire inbound messages (on receive, not just on send)
    * Add in a hook for breaking backwards compatibility in the SSU wire
      protocol directly by including a version as part of the handshake.  The
      version is currently set to 0, however, so the wire protocol from this
      build is compatible with all earlier SSU implementations.
    * Increased the number of complete message readers, cutting down
      substantially on the delay processing inbound messages.
    * Delete the message history file on startup
    * Reworked the restart/shutdown display on the console (thanks bd_!)
2006-02-19 12:29:57 +00:00
c94de2fbb5 2006-02-18 jrandom
* Migrate the outbound packets from a central component to the individual
      per-peer components, substantially cutting down on lock contention when
      dealing with higher degrees.
    * Load balance the outbound SSU transfers evenly across peers, rather than
      across messages (so peers with few messages won't be starved by peers
      with many).
    * Reduce the frequency of router info rebuilds (thanks bar!)
2006-02-19 03:58:08 +00:00
5aa335740a 2006-02-18 jrandom
* Migrate the outbound packets from a central component to the individual
      per-peer components, substantially cutting down on lock contention when
      dealing with higher degrees.
    * Load balance the outbound SSU transfers evenly across peers, rather than
      across messages (so peers with few messages won't be starved by peers
      with many).
    * Reduce the frequency of router info rebuilds (thanks bar!)
2006-02-19 03:22:31 +00:00
1202751359 2006-02-18 jrandom
* Add a new AIMD throttle in SSU to control the number of concurrent
      messages being sent to a given peer, in addition to the throttle on the
      number of concurrent bytes to that peer.
    * Adjust the existing SSU outbound queue to throttle based on the queue's
      lag, not an arbitrary number of packets.
2006-02-18 06:48:48 +00:00
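AIMD here is the classic additive-increase / multiplicative-decrease control loop, applied to the number of messages allowed in flight to one peer. A hypothetical sketch of such a window (names and constants are illustrative, not the SSU code):

    // Illustrative only: per-peer concurrent-message window with AIMD adjustment.
    class ConcurrentMessageWindow {
        private int window = 8;                // starting allowance (arbitrary here)
        private int inFlight = 0;

        synchronized boolean tryAcquire() {
            if (inFlight >= window)
                return false;                  // throttle: too many messages already in flight
            inFlight++;
            return true;
        }
        synchronized void messageAcked()  { inFlight--; window++; }                         // additive increase
        synchronized void messageFailed() { inFlight--; window = Math.max(1, window / 2); } // multiplicative decrease
    }
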
34fcf53d87 2006-02-18 jrandom
* Add a new AIMD throttle in SSU to control the number of concurrent
      messages being sent to a given peer, in addition to the throttle on the
      number of concurrent bytes to that peer.
    * Adjust the existing SSU outbound queue to throttle based on the queue's
      lag, not an arbitrary number of packets.
2006-02-18 06:38:30 +00:00
9ddc632b9f 2006-02-17 jrandom
* Properly fix the build request queue throttling, using queue age to
      detect congestion, rather than queue size.
2006-02-17 22:29:28 +00:00
941b65eb32 2006-02-17 jrandom
* Disable the message history log file by default (duh - feel free to
      delete messageHistory.txt after upgrading.  thanks deathfatty!)
    * Limit the size of the inbound tunnel build request queue so we don't
      get an insane backlog of requests that we're bound to reject, and adjust
      the queue processing so we keep on churning through them when we've got
      a backlog.
    * Small fixes for the multiuser syndie operation (thanks Complication!)
    * Renamed modified PRNG classes that were imported from gnu-crypto so we
      don't conflict with JVMs using that as a JCE provider (thanks blx!)
2006-02-17 09:16:00 +00:00
8c9167464b 2006-02-17 jrandom
* Disable the message history log file by default (duh - feel free to
      delete messageHistory.txt after upgrading.  thanks deathfatty!)
    * Limit the size of the inbound tunnel build request queue so we don't
      get an insane backlog of requests that we're bound to reject, and adjust
      the queue processing so we keep on churning through them when we've got
      a backlog.
    * Small fixes for the multiuser syndie operation (thanks Complication!)
    * Renamed modified PRNG classes that were imported from gnu-crypto so we
      don't conflict with JVMs using that as a JCE provider (thanks blx!)
2006-02-17 09:07:53 +00:00
5b94965983 * 2006-02-16 0.6.1.10 released 2006-02-16 20:44:07 +00:00
3226ea5bf6 (1.188) added downloads.legion.i2p, politguy.i2p, ninja.i2p 2006-02-16 20:24:20 +00:00
84a24784e4 so old its not funny 2006-02-16 11:30:46 +00:00
71d3fa6b8c disable by default 2006-02-16 11:20:32 +00:00
9e00dbaafd 2006-02-16 jrandom
* Add a new toggle to the web config to enable/disable the load testing
2006-02-16 10:33:29 +00:00
2e9e0c64d4 2006-02-16 jrandom
* Dropped much of the abandonware from the apps/ directory
2006-02-16 09:45:22 +00:00
fde3f1ce7d not used 2006-02-16 09:40:12 +00:00
321c560648 drop most of the abandonware 2006-02-16 09:33:53 +00:00
d2ddca7d64 *cough* 2006-02-16 09:13:48 +00:00
fb17e70f12 2006-02-16 jrandom
* Bugfix to the I2PTunnel web config to properly accept i2cp port settings
    * Initial sucker refactoring to simplify reuse of the html parsing
    * Beginnings of hooks to push imported rss/atom out to remote syndie
      archives automatically (though not enabled currently)
    * Further SSU peer test cleanup
2006-02-16 08:36:22 +00:00
79f934fe17 2006-02-16 jrandom
* Bugfix to the I2PTunnel web config to properly accept i2cp port settings
    * Initial sucker refactoring to simplify reuse of the html parsing
    * Beginnings of hooks to push imported rss/atom out to remote syndie
      archives automatically (though not enabled currently)
    * Further SSU peer test cleanup
2006-02-16 08:24:07 +00:00
3d76df6af3 thanks zzz 2006-02-16 06:16:37 +00:00
zzz
d5c36f7f4e 2006-02-15 zzz fix release # 2006-02-15 17:56:59 +00:00
zzz
41ac628744 2006-02-15 zzz update 2006-02-15 17:49:40 +00:00
41e5e1a094 "&amp;amp;" sucks 2006-02-15 14:16:16 +00:00
74edc3fa7d 2006-02-15 jrandom
* Add in per-blog RSS feeds to Syndie
    * Upgraded sucker's ROME dependency to 0.8, bundling sucked enclosures
      with the posts, marking additional attachments as Media RSS enclosures
      (http://search.yahoo.com/mrss/), since RSS only supports one enclosure
      per item.
    * Don't allow the default syndie user to be set to something invalid if
      it's in single user mode.
2006-02-15 13:42:20 +00:00
3a26218b5a just for clarity 2006-02-15 13:41:47 +00:00
687abd9427 2006-02-15 jrandom
* Add in per-blog RSS feeds to Syndie
    * Upgraded sucker's ROME dependency to 0.8, bundling sucked enclosures
      with the posts, marking additional attachments as Media RSS enclosures
      (http://search.yahoo.com/mrss/), since RSS only supports one enclosure
      per item.
    * Don't allow the default syndie user to be set to something invalid if
      it's in single user mode.
2006-02-15 13:36:28 +00:00
113fbc1df3 2006-02-15 jrandom
* Merged in the i2p_0_6_1_10_PRE branch to the trunk, so CVS HEAD is no
      longer backwards compatible (and should not be used until 0.6.1.10 is
      out)
2006-02-15 05:33:17 +00:00
zzz
1374ea0ea1 2006-02-07 zzz update 2006-02-08 05:59:10 +00:00
zzz
424e55d3b2 2006-02-01 (zzz) 1-31 update 2006-02-02 02:59:43 +00:00
fde4f579f4 file BuildRequestor.java was initially added on branch i2p_0_6_1_10_PRE. 2006-02-02 01:32:33 +00:00
2d651a41f0 2006-01-25 jrandom
* Run the peer profile coalescing/reorganization outside the job queue
      (on one of the timers), to cut down on some job queue congestion.  Also,
      trim old profiles while running, not just when starting up.
    * Slightly more sane intra-floodfill-node netDb activity (only flood new
      entries)
    * Workaround in the I2PTunnelHTTPServer for some bad requests (though the
      source of the bug is not yet addressed)
    * Better I2PSnark reconnection handling
    * Further cleanup in the new tunnel build process
    * Make sure we expire old participants properly
    * Remove much of the transient overload throttling (it wasn't using a good
      metric)
2006-01-26 04:47:12 +00:00
1eebd5463f 2006-01-25 jrandom
* Run the peer profile coalescing/reorganization outside the job queue
      (on one of the timers), to cut down on some job queue congestion.  Also,
      trim old profiles while running, not just when starting up.
    * Slightly more sane intra-floodfill-node netDb activity (only flood new
      entries)
    * Workaround in the I2PTunnelHTTPServer for some bad requests (though the
      source of the bug is not yet addressed)
    * Better I2PSnark reconnection handling
    * Further cleanup in the new tunnel build process
    * Make sure we expire old participants properly
    * Remove much of the transient overload throttling (it wasn't using a good
      metric)
2006-01-26 04:41:44 +00:00
f22601b477 2006-01-25 jrandom
* Run the peer profile coalescing/reorganization outside the job queue
      (on one of the timers), to cut down on some job queue congestion.  Also,
      trim old profiles while running, not just when starting up.
    * Slightly more sane intra-floodfill-node netDb activity (only flood new
      entries)
    * Workaround in the I2PTunnelHTTPServer for some bad requests (though the
      source of the bug is not yet addressed)
    * Better I2PSnark reconnection handling
    * Further cleanup in the new tunnel build process
    * Make sure we expire old participants properly
    * Remove much of the transient overload throttling (it wasn't using a good
      metric)
2006-01-26 04:28:15 +00:00
ab8e11657f 2006-01-25 dust
* Fix IRC client proxy to use ISO-8859-1.
2006-01-25 15:34:28 +00:00
zzz
17eb7fa983 2006-1-24 zzz update 2006-01-25 07:43:02 +00:00
d1134f9704 added hidden.i2p, bk1k.i2p, antipiracyagency.i2p 2006-01-23 15:29:36 +00:00
13fe45b489 2006-01-22 jrandom
* New tunnel build process - does not use the new crypto or new peer
      selection strategies.  However, it does drop the fallback tunnel
      procedure, except for tunnels who are configured to allow them, or for
      the exploratory pool during bootstrapping or after a catastrophic
      failure.  This new process prefers to fail rather than use too-short
      tunnels, so while it can do some pretty aggressive tunnel rebuilding,
      it may expose more tunnel failures to the user.
    * Always prefer normal tunnels to fallback tunnels.
    * Potential fix for a bug while changing i2cp settings on I2PSnark (thanks
      bar!)
    * Do all of the netDb entry writing in a separate thread, avoiding
      duplicates and batching them up.
2006-01-23 00:51:54 +00:00
cd235e5902 2006-01-19 Complication
* Explain better where eepsite's destkey can be found
2006-01-20 04:40:24 +00:00
b727d868fb 2006-01-18 cervantes
* Add title attributes to all external links in Syndie, so we can rollover
      and quickly see if it's worth clicking on.
    * Fixed a minor compiler warning.
2006-01-18 06:37:52 +00:00
zzz
cd2609b34a (zzz) 1-17 update 2006-01-18 03:45:37 +00:00
a12ede096a 2006-01-17 jrandom
* First pass of the new tunnel creation crypto, specified in the new
      router/doc/tunnel-alt-creation.html (referenced in the current
      router/doc/tunnel-alt.html).  It isn't actually used anywhere yet, other
      than in the test code, but the code verifies the technical viability, so
      further scrutiny would be warranted.
2006-01-17 22:56:15 +00:00
eb3442823e 2006-01-16 cervantes
* Dragged I2P kicking and screaming into 2006 (Oops)
2006-01-16 22:00:04 +00:00
211f37c207 2005-01-14 cervantes
* Removed entirely misleading memory status from the console summary.
2006-01-14 20:14:47 +00:00
d60d0923c0 2005-01-13 cervantes
* Further Syndie layout hardening and typeface balancing.
2006-01-13 06:27:43 +00:00
zzz
0448203ede (zzz) spelling fix and tweaks 2006-01-13 02:51:27 +00:00
00c97dbf92 0.6.1.9 2006-01-13 01:13:05 +00:00
d07342e3e3 * 2005-01-12 0.6.1.9 released 2006-01-12 21:18:38 +00:00
205d8de281 2005-01-12 jrandom
* Only create the loadtest.log if requested to do so (thanks zzz!)
    * Make sure we cleanly take into consideration the appropriate data
      points when filtering out duplicate messages in the message validator,
      and report the right bloom filter false positives rate (not used for
      anything except debugging)
2006-01-12 19:35:54 +00:00
a638301b5c 2005-01-12 cervantes
* Syndie CSS tweaks to remove some redundant declarations, improve font
      scaling and layout robustness. Improved cross browser compatibility
      (in other words "kicked IE"). Tightened the look of the blog template
      a little.
2006-01-12 09:59:55 +00:00
4f51ad492b 2005-01-11 Complication
* CSS comment fixes
2006-01-11 23:19:38 +00:00
zzz
c3a9f72d41 (zzz) 1-10-06 update 2006-01-11 23:12:31 +00:00
79476d3609 2005-01-11 jrandom
* Include the attachments/blogs/etc for comments on the blog view
    * Syndie HTML fixes (thanks cervantes!)
    * Make sure we fully reset the objects going into our cache before we
      reuse them (thanks zzz!)
2006-01-11 20:32:34 +00:00
zzz
97a206fcda (zzz) add step-by-step eepsite announce instructions 2006-01-11 05:31:05 +00:00
f783e65a4c added decadence.i2p, freedomarchives.i2p, closedshop.i2p 2006-01-10 20:53:10 +00:00
dbd1f65864 2005-01-10 jrandom
* Added the per-post list of attachments/blogs/etc to the blog view in
      Syndie (though this does not yet include comments or some further
      refinements)
    * Have the I2P shortcut launch i2p.exe instead of i2psvc.exe on windows,
      removing the dos box (though also removes the restart functionality...)
    * Give the i2p.exe the correct java.library.path to support the systray
      dll (thanks Bobcat, Sugadude, anon!)
2006-01-10 06:59:07 +00:00
5c78d8108f 2005-01-09 jrandom
* Removed a longstanding bug that had caused unnecessary router identity
      churn due to clock skew
    * Temporarily sanity check within the streaming lib for long pending
      writes
    * Added support for a blog-wide logo to Syndie, and automated the pushing
      of updated extended blog info data alongside the metadata.
2006-01-10 02:12:54 +00:00
1b273bdf43 2005-01-09 jrandom
* Removed a longstanding bug that had caused unnecessary router identity
      churn due to clock skew
    * Temporarily sanity check within the streaming lib for long pending
      writes
    * Added support for a blog-wide logo to Syndie, and automated the pushing
      of updated extended blog info data alongside the metadata.
2006-01-09 23:18:15 +00:00
934f4082f1 2005-01-09 jrandom
* Removed a longstanding bug that had caused unnecessary router identity
      churn due to clock skew
    * Temporarily sanity check within the streaming lib for long pending
      writes
    * Added support for a blog-wide logo to Syndie, and automated the pushing
      of updated extended blog info data alongside the metadata.
2006-01-09 22:22:41 +00:00
002aed145f 2005-01-09 jrandom
* Bugfix for a rare SSU error (thanks cervantes!)
    * More progress on the blog interface, allowing customizable blog-wide
      links.
2006-01-09 07:02:01 +00:00
1ca27ffd39 2005-01-09 jrandom
* Bugfix for a rare SSU error (thanks cervantes!)
    * More progress on the blog interface, allowing customizable blog-wide
      links.
2006-01-09 06:33:29 +00:00
66e6dbec33 2006-01-08 jrandom
* First pass of the new blog interface, though without much of the useful
      customization features (coming soon)
2006-01-08 21:50:54 +00:00
894caaa63c 2006-01-08 jrandom
* First pass of the new blog interface, though without much of the useful
      customization features (coming soon)
2006-01-08 20:54:36 +00:00
zzz
97dae94b46 (zzz) update 2006-01-05 06:29:55 +00:00
c00488afeb 2006-01-04 jrandom
* Rather than profile individual tunnels for throughput over their
      lifetime, do so at 1 minute intervals (allowing less frequently active
      tunnels to be more fairly measured).
    * Run the live tunnel load test across two tunnels at a time, by default.
      The load test runs for a random period from 90s to the tunnel lifetime,
      self paced.  This should help gathering data for profiling peers that
      are in exploratory tunnels.
2006-01-03  jrandom
    * Calculate the overall peer throughput across the 3 fastest one minute
      tunnel throughput values, rather than the single fastest throughput.
    * Degrade the profiled throughput data over time (cutting the profiled
      peaks in half once a day, on average)
    * Enable yet another new speed calculation for profiling peers, using the
      peak throughput from individual tunnels that a peer is participating in,
      rather than across all tunnels they are participating in.  This helps
      gather a fairer peer throughput measurement, since it won't let a slow
      high capacity peer seem to have a higher throughput (pushing a little
      data across many tunnels at once, as opposed to lots of data across a
      single tunnel).  This degrades over time like the other.
    * Add basic OS/2 support to the jbigi code (though we do not bundle a
      precompiled OS/2 library)
2006-01-05 02:48:13 +00:00
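A hypothetical sketch of the throughput profiling described above: take the average of the three fastest one-minute tunnel throughput measurements, and periodically halve the stored peaks so old bursts decay (the entry above does this roughly once a day). Names are illustrative, not the profile code.

    import java.util.Arrays;

    // Illustrative only.
    static double profiledThroughputBps(double[] oneMinutePeaksBps) {
        if (oneMinutePeaksBps.length == 0)
            return 0;
        double[] sorted = oneMinutePeaksBps.clone();
        Arrays.sort(sorted);                              // ascending
        int count = Math.min(3, sorted.length);
        double sum = 0;
        for (int i = 0; i < count; i++)
            sum += sorted[sorted.length - 1 - i];         // the three fastest values
        return sum / count;
    }

    // Called periodically to degrade the profiled data over time.
    static void decayPeaks(double[] oneMinutePeaksBps) {
        for (int i = 0; i < oneMinutePeaksBps.length; i++)
            oneMinutePeaksBps[i] /= 2;                    // cut profiled peaks in half
    }
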
23723b56ca 2006-01-01 jrandom
* Disable multifile torrent creation in I2PSnark's web UI for the moment
      (though it can still seed and participate in multifile swarms)
    * Enable a new speed calculation for profiling peers, using their peak
      1 minute average tunnel throughput as their speed.
2006-01-01 17:23:26 +00:00
76f89ac93c 2005-12-31 jrandom
* Include a simple torrent creator in the I2PSnark web UI
    * Further streaming lib closing improvements
    * Refactored the load test components to run off live tunnels (though,
      still not safe for normal/anonymous load testing)
2005-12-31 23:40:20 +00:00
0f8611e465 2005-12-30 jrandom
* Close streams more gracefully
2005-12-30 23:33:52 +00:00
8e87ae08fb 2005-12-30 jrandom
* Small streaming lib bugfixes for the modified timeouts
    * Minor Syndie/Sucker RSS html fix
    * Small synchronization fix in I2PSnark (thanks fsm!)
2005-12-30 20:57:53 +00:00
5b1a6391f3 2005-12-30 jrandom
* Replaced the bundled linux jcpuid (written in C++) with scintilla's
      jcpuid (written in C), removing the libg++.so.5 dependency that has bit
      some distros (e.g. mandriva)
2005-12-30 18:16:46 +00:00
728f177473 2005-12-29 jrandom
* Minor fix to the new ERR-ClockSkew to deal with people whose clocks are
      actually correct
2005-12-29 13:07:22 +00:00
1d0d0d9c69 2005-12-27 jrandom
* Add a new Status: line on the router console - "ERR-ClockSkew", in case
      the clock is too skewed to do anything useful (check the year and month,
      not just the hour and minute).
    * Fixed the read/write timeouts in the streaming lib (so that it actually
      honors them now)
    * Minor I2PSnark cleanups (no read timeout, more careful shutdown and
      torrent closing)
    * Handle an oddball tunnel creation failure (thanks Xunk)
2005-12-27 13:20:50 +00:00
9b7e5d1817 2005-12-26 Complication
* Fix some integer typecasting in I2PSnark (caused >2GB torrents to fail)
    * HTML readability cosmetics on "Peers" page
2005-12-27 04:20:29 +00:00
dc0485b526 fix ugliness in release history of help.jsp
[yes, i am still alive *g*]
2005-12-23 04:36:31 +00:00
784d465d17 * 2005-12-22 0.6.1.8 released
2005-12-22  jrandom
    * Bundle the standalone I2PSnark launcher in the installer and update
      process (launch as "java -jar launch-i2psnark.jar", viewing the
      interface on http://localhost:8002/)
    * Don't autostart swarming torrents by default so that you can run a
      standalone I2PSnark from the I2P install dir and not have the embedded
      I2PSnark autolaunch the torrents that the standalone instance is running
    * Fixed a rare streaming lib bug that could let a blocking call wait
      forever.
2005-12-22 12:49:09 +00:00
327089c9d1 4matting 2005-12-22 10:10:39 +00:00
148dd99c86 2005-12-22 jrandom
* Cleaned up some buffer synchronization issues in I2PSnark that could
       cause blockage.
2005-12-22 10:04:12 +00:00
98277d3b64 2005-12-21 jrandom
* Adjusted I2PSnark's usage of the streaming lib (tweaking it for BT's
      behavior)
    * Fixed the I2PSnark bug that would lose track of live peers
2005-12-21 12:04:54 +00:00
702e5a5eab 2005-12-19 cervantes
* Silly RDF syntax
    * Oops...
2005-12-21 04:15:36 +00:00
54c91731c0 rdf updates for easier fire2pe/xul handling 2005-12-20 10:22:24 +00:00
3bfc109476 (cough. not often a problem though) 2005-12-20 02:29:09 +00:00
3989638f2d 2005-12-19 jrandom
* Fix for old Syndie blog bookmarks (thanks Complication!)
    * Fix for I2PSnark to accept incoming connections again (oops)
    * Randomize the order that peers from the tracker are contacted
2005-12-20 02:01:37 +00:00
4a65fd4f46 2005-12-19 jrandom
* I2PSnark logging, disconnect old inactive peers rather than new ones,
      memory usage reduction, better OOM handling, and a shared connection
      acceptor.
    * Cleaned up the Syndie blog page and the resulting filters (viewing a
      blog from the blog page shows threads started by the selected author,
      not those that they merely participate in)
2005-12-19 13:34:52 +00:00
d525c49d45 2005-12-18 jrandom
* Added a standalone runner for the I2PSnark web ui (build with the
      command "ant i2psnark", unzip i2psnark-standalone.zip somewhere, run
      with "java -jar launch-i2psnark.jar", and go to http://localhost:8002/).
    * Further I2PSnark error handling
2005-12-17  jrandom
    * Let multiuser accounts authorize themselves to access the remote
      functionality again (thanks Ch0Hag!)
    * Adjust the JVM heap size to 128MB for new installs (existing users can
      accomplish this by editing wrapper.config, adding the line
      "wrapper.java.maxmemory=128", and then doing a full shutdown and startup
      of the router).  This is relevant for heavy usage of I2PSnark in the
      router console.
2005-12-18 05:52:19 +00:00
c287bace0f 2005-12-18 jrandom
* Added a standalone runner for the I2PSnark web ui (build with the
      command "ant i2psnark", unzip i2psnark-standalone.zip somewhere, run
      with "java -jar launch-i2psnark.jar", and go to http://localhost:8002/).
    * Further I2PSnark error handling
2005-12-18 05:39:52 +00:00
ee0951b5b2 2005-12-17 jrandom
* Use our faster SHA1, rather than the JVM's within I2PSnark, and let
      'piece' sizes grow larger than before.
2005-12-17 09:22:07 +00:00
1eb3ae5e1b 2005-12-16 jrandom
* Added some I2PSnark sanity checks, an OOMListener when running
      standalone, and a guard against keeping memory tied up indefinitely.
    * Sanity check on the watchdog (thanks zzz!)
    * Handle invalid HTTP requests in I2PTunnel a little better
2005-12-17 03:47:02 +00:00
7d234b1978 2005-12-16 jrandom
* Moved I2PSnark from using Threads to I2PThreads, so we handle OOMs
      properly (thanks Complication!)
    * More guards in I2PSnark for zany behavior (I2PSession recon w/ skew,
      b0rking in the DirMonitor, etc)
2005-12-16 23:18:56 +00:00
6f424fa751 2005-12-16 jrandom
* Try to run a torrent in readonly mode if we can't write to the file, and
      handle failures a little more gracefully (thanks polecat!)
2005-12-16 11:01:20 +00:00
7726bd1a5c 2005-12-16 jrandom
* Refuse torrents with too many files (128), avoiding ulimit errors.
    * Remove an fd leak in I2PSnark
    * Further I2PSnark web UI cleanup
2005-12-16 08:24:21 +00:00
2a922098d6 refresh link (good idea Complication) 2005-12-16 04:01:05 +00:00
3ec92c8b62 2005-12-15 jrandom
* Added a first pass to the I2PSnark web UI (see /i2psnark/)
2005-12-16 03:00:48 +00:00
b37bb9372e 2005-12-15 jrandom
* Added multitorrent support to I2PSnark, accessible currently by running
      "i2psnark.jar --config i2psnark.config" (which may or may not exist).
      It then joins the swarm for any torrents in ./i2psnark/*.torrent, saving
      their data in that directory as well.  Removing the .torrent file stops
      participation, and it is currently set to seed indefinitely.  Completion
      is logged to the logger and standard output, with further UI interaction
      left to the (work in progress) web UI.
2005-12-15 08:58:30 +00:00
369b6930e5 2005-12-14 jrandom
* Fix to drop peer references when we shitlist people again (thanks zzz!)
    * Further I2PSnark fixes to deal with arbitrary torrent info attributes
      (thanks Complication!)
2005-12-14 09:32:50 +00:00
5033a22a9b 2005-12-13 zzz
* Don't test tunnels expiring within 90 seconds
    * Defer Test Tunnel jobs if job lag too large
    * Use JobQueue.getMaxLag() rather than the jobQueue.jobLag stat to measure
      job lag for tunnel build backoff, allowing for more agile handling
      (since the stat is only updated once a minute)
    * Use tunnel length override if all tunnels are expiring within one
      minute.
2005-12-13 21:56:41 +00:00
6e5114a4c2 multihop load tests (testing everyone with a fixed set of fast other peers).
not for general purpose use, so you can probably safely ignore this code, but it's
useful for testing
2005-12-13 21:32:50 +00:00
77c818a0b1 toolbar.html isn't in cvs yet (polecat - please check this diff :) 2005-12-13 09:43:41 +00:00
7ac673cea4 2005-12-13 jrandom
* Fixed I2PSnark's handling of some torrent files to deal with those
      created by Azureus and I2PRufus (it didn't know how to deal with
      additional meta info, such as path.utf-8 or name.utf-8).
2005-12-13 09:38:51 +00:00
e5fa7e0ae4 Navbar is now customizable via docs/toolbar.html. There is a default should that file not be there. And... wtf, didn't my syndie thumbnail patch take? Well, no conflicts reported, so here it goes again. 2005-12-13 08:18:59 +00:00
ab4f3008cb 2005-12-09 zzz
* Create different strategies for exploratory tunnels (which are difficult
      to create) and client tunnels (which are much easier)
    * Gradually increase number of parallel build attempts as tunnel expiry
      nears.
    * Temporarily shorten attempted build tunnel length if builds using
      configured tunnel length are unsuccessful
    * React more aggressively to tunnel failure than routine tunnel
      replacement
    * Make tunnel creation times randomized - there is existing code to
      randomize the tunnels but it isn't effective due to the tunnel creation
      strategy. Currently, most tunnels get built all at once, at about 2 1/2
      to 3 minutes before expiration. The patch fixes this by fixing the
      randomization, and by changing the overlap time (with old tunnels) to a
      range of 2 to 4 minutes.
    * Reduce number of excess tunnels. Lots of excess tunnels get created due
      to overlapping calls. Just about anything generated a call which could
      build many tunnels all at once, even if tunnel building was already in
      process.
    * Miscellaneous router console enhancements
2005-12-09 08:05:44 +00:00
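A rough sketch of the randomized build timing from the 2005-12-09 entry above:
schedule each replacement tunnel at a random point 2 to 4 minutes before the
old tunnel expires, instead of building everything at once around the same
offset. Names are illustrative, not the actual tunnel pool code.

    import java.util.Random;

    public class RebuildTimingSketch {
        private static final Random RANDOM = new Random();

        /** @return when (ms since epoch) to start building the replacement tunnel */
        public static long scheduleRebuild(long tunnelExpiration) {
            // overlap with the old tunnel for a randomized 2-4 minutes
            long overlap = 2 * 60 * 1000 + RANDOM.nextInt(2 * 60 * 1000);
            return tunnelExpiration - overlap;
        }
    }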
f738a02760 link to the filtered blog, not to the current viewBlogs page 2005-12-08 21:01:04 +00:00
7d64ecb628 2005-12-08 jrandom
* Minor bugfix in SSU for dealing with corrupt packets
    * Added some hooks for load testing
2005-12-08 20:53:41 +00:00
7beacff028 2005-12-07 jrandom
* Added a first pass at a blog view in Syndie
2005-12-08 00:50:32 +00:00
952bcc696a 2005-12-07 jrandom
* Expand the thread we're viewing to its leaf
    * Bugfix on intraday ordering (children are always newer than parents)
2005-12-07 20:19:42 +00:00
19bba048f2 2005-12-05 jrandom
* Added an RDF and XML thread export to Syndie, reachable at
      .../threadnav/rdf or .../threadnav/xml, accepting the parameters
      count=$numThreads and offset=$threadIndex.  If the $numThreads is -1, it
      displays all threads.
2005-12-05 06:14:15 +00:00
5966fcf5ff 2005-12-04 TLorD
* Patch for the C SAM library to null terminate strings on copy (thanks!)
2005-12-04 20:12:19 +00:00
fbd7feee61 2005-12-04 jrandom
* Bugfix in Syndie for a problem in the threaded indexer (thanks CofE!)
    * Always include ourselves in the favorite authors (since we don't
      bookmark ourselves)
2005-12-04 20:02:24 +00:00
5faca98176 I am such a useless bitch. 2005-12-04 17:04:10 +00:00
99951bf815 Adding a schema for [link] to handle if you want to display links directly to your attachments within the context of the blog itself. Some redundant code here (3 files modified with cut & paste) so we may want to further abstract the External links: HTML generation code. 2005-12-04 13:55:27 +00:00
024b1a04e2 no message 2005-12-04 03:30:32 +00:00
a4cc18df76 2005-12-03 jrandom
* Use newsgroup-like tags by default in Syndie's interface
2005-12-04 03:18:08 +00:00
fef578973a no message 2005-12-03 22:09:01 +00:00
35b75a7300 2005-12-03 jrandom
* Added support for a 'most recent posts' view that CofE requested, which
      includes the ability to filter by age (e.g. posts by your favorite
      authors in the last 5 days).
2005-12-03 19:03:44 +00:00
1c6c397913 2005-12-03 jrandom
* Adjusted Syndie to use the threaded view that cervantes suggested, which
      displays a single thread path at a time - from root to leaf - rather
      than a depth first traversal.
2005-12-03 17:33:35 +00:00
c96965d364 2005-12-03 jrandom
* Package up a standalone Syndie install into a "syndie-standalone.zip",
      buildable with "ant syndie".  It extracts into ./syndie/, launches with
      "java -jar launchsyndie.jar" (or javaw, on windows, to avoid a dos box),
      running a single user Syndie instance (by default).  It also creates a
      default subscription to syndiemedia without any anonymity (using no
      proxy).  Upgrades can be done by just replacing the syndie.war with the
      one from I2P.
2005-12-03 05:41:25 +00:00
12900ca709 * 2005-12-01 0.6.1.7 released
2005-12-01  jrandom
    * Add a new criteria to the tunnel join throttle, backing off people if we
      are failing to talk to our peers more than usual.
2005-12-01 17:16:53 +00:00
f5b829a124 2005-11-30 jrandom
* Cleaned up the build process to deal with Jetty 5.1.6 and rename the
      new commons-logging-api.jar to commons-logging.jar, which it replaces.
      Jetty 5.1.6 is pushed with all updates.  Also, no need to push a
      separate jdom or rome, as they're inside syndie.war.
2005-12-01 01:13:44 +00:00
f62a6d3ce6 2005-11-30 jrandom
* Don't let the TCP transport alone shitlist a peer, since other
      transports may be working.  Also display whether TCP connections are
      inbound or outbound on the peers page.
    * Fixed some substantial bugs in the SSU introducers where we wouldn't
      talk to anyone who didn't expose an IP (even if they had introducers),
      among other goofy things.
    * When dealing with SSU introducers, send them all a packet at 3s/6s/9s,
      rather than sending one a packet at 3s, then another a packet at 6s,
      and a third a packet at 9s.
    * Fixed Syndie attachments (oops)
2005-11-30 20:48:25 +00:00
3d18bf870b 2005-11-29 zzz
* Added a link to orion's jump page on the 'key not found' error page.
2005-11-29 17:56:02 +00:00
d8071296eb 2005-11-29 jrandom
* Further Syndie UI cleanup
    * Bundled our patched MultiPartRequest code from jetty (APL2 licensed),
      since it hasn't been applied to the jetty CVS yet [1].  Its packaged
      into syndie.jar and renamed to net.i2p.syndie.web.MultiPartRequest, but
      will be removed as soon as its integrated into Jetty.  This patch allows
      posting content in various character sets.
      [1] http://article.gmane.org/gmane.comp.java.jetty.general/6031
    * Upgraded new installs to the latest stable jetty (5.1.6), though this
      isn't pushed as part of the update yet, as there aren't any critical
      bugs.
2005-11-29 16:58:01 +00:00
c66e3256aa 2005-11-29 jrandom
* Added back in the OSX jbigi, which was accidentally removed a few revs
      back (thanks for the bug report stoerte!)  New installs will get the
      full jbigi, or you can pull the jbigi.jar from CVS by going to
      http://dev.i2p.net/cgi-bin/cvsweb.cgi/i2p/installer/lib/jbigi/jbigi.jar
      and clicking on the first "download" link, saving that jbigi.jar to
      lib/jbigi.jar in your I2P installation directory.  After restarting your
      router, it should load up fine.
2005-11-29 12:46:34 +00:00
686742a67b 2005-11-27 jrandom
* Inlined the Syndie CSS to reduce the number of HTTP requests (and
      because firefox [and others?] delay rendering until they fetch the css).
    * Make sure we fire the shutdown tasks when regenerating a new identity
      (thanks picsou!)
    * Cleaned up some of the things I b0rked in the 'dynamic keys' mode
    * Don't drop SSU sessions if they're still transmitting data successfully,
      even if there are transmission failures
    * Adjusted the time summarization to display hours after 119m, not 90m
    * Further EepGet cleanup (grr)
2005-11-28 16:02:38 +00:00
cdf94295f3 (1.185) added TheBreton.i2p adab.i2p awup.i2p china.i2p davidkra.i2p
comwiz.i2p dust.i2p eepsites.i2p jmg.i2p kuroneko.i2p
               mywastedlife.i2p site.games.i2p squid2.i2p striker.i2p
               tracker.awup.i2p zzz.i2p
2005-11-26 18:32:22 +00:00
fbf1705c4e * 2005-11-26 0.6.1.6 released 2005-11-26 18:26:22 +00:00
453ecc4208 Further improvements, and works fine for ubergeeks, but not yet for normal
geeks (aka no router console)
2005-11-26 17:19:29 +00:00
d1f2b447ac 2005-11-26 jrandom
* Update the sorting in Syndie to consider children 'newer' than parents,
      even if they have the same message ID (duh)
    * Cleaned up some nav links in Syndie (good idea gloin, spaetz!)
    * Added a bunch of tooltips to Syndie's fields (thanks polecat!)
    * Force support for nonvalidating XML in Jetty (so we can handle GCJ/etc
      better)
2005-11-26 16:51:16 +00:00
70c4560f02 2005-11-26 jrandom
* Be more explicit about what messages we will handle through a client
      tunnel, and how we will handle them.  This cuts off a set of attacks
      that an active adversary could mount, though they're probably nonobvious
      and would require at least some sophistication.
2005-11-26 11:39:32 +00:00
9089fdd2d5 2005-11-26 Raccoon23
* Added support for 'dynamic keys' mode, where the router creates a new
      router identity whenever it detects a substantial change in its public
      address (read: SSU IP or port).  This only offers minimal additional
      protection against trivial attackers, but should provide functional
      improvement for people who have periodic IP changes, since their new
      router address would not be shitlisted while their old one would be.
    * Added further infrastructure for restricted route operation, but its use
      is not recommended.
2005-11-26 09:16:11 +00:00
ef82cc4f20 2005-11-25 jrandom
* Further Syndie UI cleanups
    * Logging cleanup
    * Fixed link to fproxy.tino.i2p (thanks zzz!)
2005-11-26 05:05:52 +00:00
f2c2a5b386 2005-11-25 jrandom
* Don't publish stats for periods we haven't reached yet (thanks zzz!)
    * Cleaned up the syndie threaded display to show the last updated date for
      a subthread, and to highlight threads updated in the last two days.
2005-11-25 11:06:00 +00:00
fc858bc950 replaced the old nonfunctional perl SAM lib with postman's new
implementation (thanks postman!)
2005-11-24 09:41:01 +00:00
dbb4b3d0c2 2005-11-24 jrandom
* Fix to save syndication settings in Syndie (thanks spaetz!)
2005-11-24 08:45:54 +00:00
2b841ad667 2005-11-23 jrandom
* Removed spurious streaming lib RTO increase (it wasn't helpful)
    * Streamlined the tunnel batching to schedule batch transmissions more
      appropriately.
    * Default tunnel pool variance to 2 +0-1 hops
2005-11-23 16:04:52 +00:00
5e094b43b3 2005-11-21 jrandom
* IE doesn't strip SPAN from <button> form fields, so add in a workaround
      within I2PTunnel.
    * Increase the maximum SSU retransmission timeout to accommodate slower or
      more congested links (though SSU's RTO calculation will usually use a
      much lower timeout)
    * Moved the streaming lib timed events off the main timer queues and onto
      a streaming lib specific set of timer queues.  Streaming lib timed
      events are more likely to have lock contention on the I2CP socket while
      other timed events in the router are (largely) independent.
    * Fixed a case sensitive lookup bug (thanks tino!)
    * Syndie cleanup - new edit form on the preview page, and fixed some blog
      links (thanks tino!)
2005-11-21 14:45:04 +00:00
33d57dd545 2005-11-21 jrandom
* IE doesn't strip SPAN from <button> form fields, so add in a workaround
      within I2PTunnel.
    * Increase the maximum SSU retransmission timeout to accommodate slower or
      more congested links (though SSU's RTO calculation will usually use a
      much lower timeout)
    * Moved the streaming lib timed events off the main timer queues and onto
      a streaming lib specific set of timer queues.  Streaming lib timed
      events are more likely to have lock contention on the I2CP socket while
      other timed events in the router are (largely) independent.
    * Fixed a case sensitive lookup bug (thanks tino!)
    * Syndie cleanup - new edit form on the preview page, and fixed some blog
      links (thanks tino!)
2005-11-21 14:37:06 +00:00
61f75b5f09 2005-11-19 jrandom
* Implemented a trivial pure java PMTU backoff strategy, switching between
      a 608 byte MTU and a 1350 byte MTU, depending upon retransmission rates.
    * Fixed new user registration in Syndie (thanks Complication!)
2005-11-20 04:42:16 +00:00
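A sketch of the two-value PMTU backoff described above: fall back to the small
MTU when the recent retransmission rate is high, and return to the large MTU
when it drops again. The thresholds here are made-up illustrative numbers, not
the values used by SSU.

    public class MtuBackoffSketch {
        static final int SMALL_MTU = 608;
        static final int LARGE_MTU = 1350;
        static final double BACKOFF_RETX_RATE = 0.10;  // illustrative: >10% retransmitted
        static final double RECOVER_RETX_RATE = 0.02;  // illustrative: <2% retransmitted

        private int mtu = LARGE_MTU;

        public int adjust(long packetsSent, long packetsRetransmitted) {
            if (packetsSent <= 0) return mtu;
            double retxRate = (double) packetsRetransmitted / packetsSent;
            if (retxRate > BACKOFF_RETX_RATE)
                mtu = SMALL_MTU;       // heavy loss: shrink packets
            else if (retxRate < RECOVER_RETX_RATE)
                mtu = LARGE_MTU;       // link looks clean again: probe back up
            return mtu;
        }
    }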
3f65e53592 2005-11-17 jrandom
* More cautious file handling in Syndie
2005-11-17 08:23:45 +00:00
99ae3ee459 2005-11-16 jrandom
* More aggressive I2PTunnel content encoding munging to work around some
      rare HTTP behavior (ignoring q values on Accept-encoding, using gzip
      even when only identity is specified, etc).  I2PTunnelHTTPClient now
      sends "Accept-encoding: \r\n" plus "X-Accept-encoding: x-i2p-gzip\r\n",
      and I2PTunnelHTTPServer handles x-i2p-gzip in either the Accept-encoding
      or X-Accept-encoding headers.  Eepsite operators who do not know to
      check for X-Accept-encoding will simply use the identity encoding.
2005-11-16 11:50:56 +00:00
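To illustrate the header munging above: the client-side proxy throws away the
browser's encoding preferences and advertises x-i2p-gzip in a separate header
that ordinary web servers ignore. This is a simplified sketch, not the actual
I2PTunnel implementation.

    import java.util.ArrayList;
    import java.util.List;

    public class HeaderMungeSketch {
        public static List<String> rewrite(List<String> requestHeaders) {
            List<String> out = new ArrayList<String>();
            for (String line : requestHeaders) {
                String lower = line.toLowerCase();
                if (lower.startsWith("accept-encoding:") || lower.startsWith("x-accept-encoding:"))
                    continue;  // discard whatever the browser asked for
                out.add(line);
            }
            out.add("Accept-encoding: ");              // identity only, for plain HTTP servers
            out.add("X-Accept-encoding: x-i2p-gzip");  // i2p-aware servers may gzip end to end
            return out;
        }
    }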
f7236d7d58 * 2005-11-15 0.6.1.5 released 2005-11-16 03:20:20 +00:00
5182008b38 more stuff 2005-11-16 03:12:04 +00:00
9d030327e6 put rome and jdom in the syndie.war, and fix some deprecation warnings 2005-11-15 12:46:54 +00:00
a15c90d2cc add in a basic limit on the attachment size (trivially circumvented, but for now it'll do)
fixed some links
2005-11-15 11:17:04 +00:00
84c2a713e1 update the post 2005-11-15 10:23:29 +00:00
7db9ce6e5b update the default syndie post, and add in some html hooks for it
only allow 40 chars in the subject within the thread tree
2005-11-15 10:22:57 +00:00
da42f5717b logging cleanup 2005-11-15 06:38:00 +00:00
5241953e5d logging 2005-11-15 06:37:20 +00:00
b1a5f61ba2 cleanup and logging 2005-11-15 06:34:04 +00:00
024a5a1ad4 deal with spaces in filenames cleanly 2005-11-15 06:29:51 +00:00
b031de5404 more cleanup 2005-11-15 06:24:17 +00:00
dceac73951 2005-11-14 jrandom
* Migrate to the new Syndie interface
2005-11-15 00:45:36 +00:00
8f95143488 more syndie cleanup 2005-11-15 00:24:35 +00:00
a8f3043aae * move over the rssimport, and default it to the proxy @ localhost:4444 2005-11-14 05:40:18 +00:00
6c91b2d4a9 migrate to the new system 2005-11-14 02:09:23 +00:00
ddd438de35 protect against spoofing in syndie with a per-jvm-instance nonce (which should prevent the spurious "are you being spoofed" things)
fixed the syndie previewing
2005-11-13 14:15:26 +00:00
16fd46db2b move the admin page over to the new system 2005-11-13 11:04:17 +00:00
1159c155a4 * migrate the post/preview over to the new system
* honor the "force new thread" and "refuse replies (from everyone but me)" functionality
2005-11-13 07:55:38 +00:00
30b4e2aa2a change one's password 2005-11-13 04:47:15 +00:00
ae46fa2e6d allow the user to change their password 2005-11-13 04:46:28 +00:00
9fc34895c9 more careful ops 2005-11-13 03:46:20 +00:00
08be4c7b3d migrated the syndie addressbook 2005-11-13 03:42:52 +00:00
7443457af4 * Ignore <ul>. Add <strong>, <p> and <li>.
* change to make [ and ] use their &#n;.
2005-11-12 17:10:56 +00:00
979a4cfb69 migrate the switchuser and register 2005-11-12 13:01:32 +00:00
ed285871bf migrate the profile page to the new format, and refactor the new format's servlets 2005-11-12 12:14:23 +00:00
8c70b8b32a ircclient too :) 2005-11-12 05:06:57 +00:00
14134694d7 2005-11-11 jrandom
* Add filtering threads by author to Syndie, populated with authors in the
      user's addressbook
    * When creating the default user, add
      "http://syndiemedia.i2p/archive/archive.txt" to their addressbook,
      configured to automatically pull updates.  (what other archives should
      be included?)
    * Tiny servlet to help dole out the new routerconsole themes, and bundle
      the installer/resources/themes/** into ./docs/themes/** on both install
      and update.
2005-11-12 05:03:51 +00:00
807d2d3509 2005-11-11 cervantes
* Initial pass of the routerconsole revamp, starting with I2PTunnel and
	  being progressively rolled out to other sections at later dates.
	  Featuring abstracted W3C strict XHTML1.0 markup, with CSS providing
	  layout and styling.
	* Implemented console themes. Users can create their own themes by
	  creating css files in: {i2pdir}/docs/themes/console/{themename}/
	  and activating it using the routerconsole.theme={themename} advanced
	  config property. Look at the example incomplete "defCon1" theme.
	  Note: This is very much a work in progress. Folks might want to hold-off
	  creating their own skins until the markup has solidified.
	* Added "routerconsole.javascript.disabled=true" to disable console
	  client-side scripting and "routerconsole.css.disabled=true" to remove
	  css styling (only rolled out in the i2ptunnel interface currently)
	* Fixed long standing bug with i2ptunnel client and server edit screens
	  where tunnel count and depth properties would fail to save. Added
	  backup quantity and variance configuration options.
	* Added basic accessibility support (key shortcuts, linear markup, alt and
	  title information and form labels).
	* So far only tested on IE6, Firefox 1.0.6, Opera 8 and lynx.
2005-11-12 02:52:40 +00:00
b222cd43f4 2005-11-11 cervantes
* Initial pass of the routerconsole revamp, starting with I2PTunnel and
	  being progressively rolled out to other sections at later dates.
	  Featuring abstracted W3C strict XHTML1.0 markup, with CSS providing
	  layout and styling.
	* Implemented console themes. Users can create their own themes by
	  creating css files in: {i2pdir}/docs/themes/console/{themename}/
	  and activating it using the routerconsole.theme={themename} advanced
	  config property. Look at the example incomplete "defCon1" theme.
	  Note: This is very much a work in progress. Folks might want to hold-off
	  creating their own skins until the markup has solidified.
	* Added "routerconsole.javascript.disabled=true" to disable console
	  client-side scripting and "routerconsole.css.disabled=true" to remove
	  css styling (only rolled out in the i2ptunnel interface currently)
	* Fixed long standing bug with i2ptunnel client and server edit screens
	  where tunnel count and depth properties would fail to save. Added
	  backup quantity and variance configuration options.
	* Added basic accessibility support (key shortcuts, linear markup, alt and
	  title information and form labels).
	* So far only tested on IE6, Firefox 1.0.6, Opera 8 and lynx.
2005-11-12 02:38:55 +00:00
7f6aa327f2 allow the user to override what login/pass is used for the default user in single user mode 2005-11-11 12:26:17 +00:00
49564a3878 2005-11-11 jrandom
* Default Syndie to single user mode, and automatically log into a default
      user account (additional accounts can be logged into with the 'switch'
      or login pages, and new accounts can be created with the register page).
    * Disable the 'automated' column on the Syndie addressbook unless the user
      is appropriately authorized (good idea Polecat!)
2005-11-11 11:29:15 +00:00
12ddaff0ce .. 2005-11-11 10:09:04 +00:00
a8ea239dcc *cough* 2005-11-11 09:58:03 +00:00
ca391097a9 onwards, christian soldiers 2005-11-11 09:39:48 +00:00
4297edc88f wait until we build the tree before sorting the threads, so we can have the most recently updated thread first 2005-11-11 05:10:38 +00:00
6de4673e9e 2005-11-10 jrandom
* First pass to a new threaded Syndie interface, which isn't enabled by
      default, as its not done yet.
2005-11-11 03:46:36 +00:00
f6979c811f 2005-11-10 jrandom
* First pass to a new threaded Syndie interface, which isn't enabled by
      default, as its not done yet.
2005-11-11 03:41:16 +00:00
bd86483204 2005-11-06 jrandom
* Include SSU establishment failure in the peer profile as a commError,
      as we do for TCP establishment failures.
    * Don't throttle the initial transmission of a message because of ongoing
      retransmissions to a peer, since the initial transmission of a message
      is more valuable than a retransmission (since it has less latency).
    * Cleaned up links to SusiDNS and I2PTunnel (thanks zzz!)
2005-11-06 22:25:17 +00:00
53cf03cec6 no need for ant's IgnoreBlank (and its not in the classpath anyway) 2005-11-05 22:50:14 +00:00
e284a8878b 2005-11-05 jrandom
* Include the most recent ACKs with packets, rather than only sending an
      ack exactly once.  SSU differs from TCP in this regard, as TCP has ever
      increasing sequence numbers, while each message ID in SSU is random, so
      we don't get the benefit of later ACKs implicitly ACKing earlier
      messages.
    * Reduced the max retransmission timeout for SSU
    * Don't try to send messages queued up for a long time waiting for
      establishment.
2005-11-05 21:19:05 +00:00
14cd469c6d 2005-11-05 jrandom
* Include the most recent ACKs with packets, rather than only sending an
      ack exactly once.  SSU differs from TCP in this regard, as TCP has ever
      increasing sequence numbers, while each message ID in SSU is random, so
      we don't get the benefit of later ACKs implicitly ACKing earlier
      messages.
    * Reduced the max retransmission timeout for SSU
    * Don't try to send messages queued up for a long time waiting for
      establishment.
2005-11-05 21:12:57 +00:00
9050d7c218 2005-11-05 dust
* Fix sucker to delete its temporary files.
    * Improve sucker's sml output some.
    * Fix Exception in SMLParser for weird sml.
2005-11-05 11:01:57 +00:00
0ad18cd0ba 2005-10-31 jrandom
* Fix for some syndie reply scenarios (thanks identiguy and CofE!)
    * Removed a potentially infinitely recursive call (oops)
(forgot to commit this file before.  oops)
2005-11-04 03:08:00 +00:00
ca0af146b7 2005-11-03 zzz
* Added a new error page to the eepproxy to differentiate the full 60
      second timeout from the immediate "I don't know this base64" failure.
2005-11-04 01:20:17 +00:00
a2d2b031f4 2005-11-01 jrandom
* Added a few more css elements (thanks identiguy!)
2005-11-02 00:35:21 +00:00
2f36912ac0 2005-10-31 jrandom
* Fix for some syndie reply scenarios (thanks identiguy and CofE!)
    * Removed a potentially infinitely recursive call (oops)
2005-10-31 20:03:11 +00:00
53e32c8e64 *** empty log message *** 2005-10-30 07:38:42 +00:00
10dde610dc 2005-10-30 dust
* Merge sucker into syndie with a rssimport.jsp page.
    * Add getContentType() to EepGet.
    * Make chunked transfer work (better) with EepGet.
    * Do replaceAll("<","&lt;") for logs.
2005-10-30 05:47:55 +00:00
f7c2ae9a3b adjust the exe build filter to only run on linux or windows 2005-10-30 01:07:09 +00:00
ac3b88b9e9 0.6.1.4 2005-10-29 23:22:11 +00:00
60124cdcdc * 2005-10-29 0.6.1.4 released 2005-10-29 23:20:04 +00:00
e7d2281772 2005-10-29 jrandom
* Improved the bandwidth throttling on tunnel participation, especially for
      low bandwidth peers.
    * Improved failure handling in SSU with proactive reestablishment of
      failing idle peers, and rather than shitlisting a peer who failed too
      much, drop the SSU session and allow a new attempt (which, if it fails,
      will cause a shitlisting)
    * Clarify the cause of the shitlist on the profiles page, and include
      bandwidth limiter info at the bottom of the peers page.
2005-10-29 21:43:41 +00:00
52ace2d695 2005-10-29 jrandom
* Improved the bandwidth throttling on tunnel participation, especially for
      low bandwidth peers.
    * Improved failure handling in SSU with proactive reestablishment of
      failing idle peers, and rather than shitlisting a peer who failed too
      much, drop the SSU session and allow a new attempt (which, if it fails,
      will cause a shitlisting)
    * Clarify the cause of the shitlist on the profiles page, and include
      bandwidth limiter info at the bottom of the peers page.
2005-10-29 21:35:24 +00:00
b5a25801b4 2005-10-26 jrandom
* In Syndie, propagate the subject and tags in a reply, and show the parent
      post on the edit page for easy quoting.  (thanks identiguy and CofE!)
    * Streamline some netDb query handling to run outside the jobqueue -
      which means they'll run on the particular SSU thread that handles the
      message.  This should help out heavily loaded netDb peers.
2005-10-28 22:26:47 +00:00
3fc0558810 added ttc.i2p 2005-10-28 01:13:10 +00:00
bd9c6ff463 oops (allow cwin=1 for interactive streams) 2005-10-25 20:19:49 +00:00
a0c822af96 2005-10-25 jrandom
* Defer netDb searches for newly referenced peers until we actually want
      them
    * Ignore netDb references to peers on our shitlist
    * Set the timeout for end to end client messages to the max delay after
      finding the leaseSet, so we don't have as many expired messages floating
      around.
    * Add a floor to the streaming lib window size
    * When we need to send a streaming lib ACK, try to retransmit one of the
      unacked packets instead (with updated ACK/NACK fields, of course).  The
      bandwidth cost of an unnecessary retransmission should be minor as
      compared to both an ACK packet (rounded up to 1KB in the tunnels) and
      the probability of a necessary retransmission.
    * Adjust the streaming lib cwin algorithm to allow growth after a full
      cwin of messages if the rtt is trending downwards.  If it is not, use the
      existing algorithm.
    * Increased the maximum rto size in the streaming lib.
    * Load balancing bugfix on end to end messages to distribute across
      tunnels more evenly.
2005-10-25 19:23:17 +00:00
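A sketch of the cwin growth rule in the entry above: once a full congestion
window's worth of messages has been ACKed, grow the window only if the RTT is
trending downward; otherwise leave it to the existing algorithm. Invented
names, not the streaming lib's actual code.

    public class CwinGrowthSketch {
        private int cwin = 1;                  // congestion window, in messages
        private long lastRtt = Long.MAX_VALUE;
        private int ackedThisWindow = 0;

        public void messageAcked(long rttMillis) {
            boolean rttTrendingDown = rttMillis < lastRtt;
            lastRtt = rttMillis;
            if (++ackedThisWindow >= cwin) {   // a full cwin of messages acked
                ackedThisWindow = 0;
                if (rttTrendingDown)
                    cwin++;                    // allow growth while latency improves
                // otherwise: defer to the pre-existing growth algorithm
            }
        }

        public int getWindowSize() { return cwin; }
    }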
4de302101d 2005-10-24 jrandom
* Defer netDb searches for newly referenced peers until we actually want
      them
    * Ignore netDb references to peers on our shitlist
    * Set the timeout for end to end client messages to the max delay after
      finding the leaseSet, so we don't have as many expired messages floating
      around.
    * Add a floor to the streaming lib window size
    * When we need to send a streaming lib ACK, try to retransmit one of the
      unacked packets instead (with updated ACK/NACK fields, of course).  The
      bandwidth cost of an unnecessary retransmission should be minor as
      compared to both an ACK packet (rounded up to 1KB in the tunnels) and
      the probability of a necessary retransmission.
    * Adjust the streaming lib cwin algorithm to allow growth after a full
      cwin of messages if the rtt is trending downwards.  If it is not, use the
      existing algorithm.
    * Increased the maximum rto size in the streaming lib.
    * Load balancing bugfix on end to end messages to distribute across
      tunnels more evenly.
2005-10-25 19:13:53 +00:00
84383c3dab * Enforce delay >= 1 in BlogManager.getUpdateDelay() 2005-10-25 01:24:32 +00:00
a94abb13a4 * Factor archive fetching into a separate method from the update loop. 2005-10-25 01:03:55 +00:00
ad59ab691b (sun.* is sun specific) 2005-10-24 09:45:00 +00:00
ee9ac31c8b * Instantiate new RemoteArchiveBean for each archive fetched by the updater, to prevent weirdness if the index fetch for archive n+1 fails.
* Add a blocking fetch to EepGetScheduler and RemoteArchiveBean and use them from the updater, to prevent race conditions with multiple archive fetches.
2005-10-24 06:27:56 +00:00
788998307a * Fixed bugs preventing the use of fetchSelectedBulk in the updater 2005-10-24 05:27:22 +00:00
2f8a2879bb no more -Di2p.weakPRNG :) 2005-10-22 18:12:09 +00:00
c7b9525d2c 2005-10-22 jrandom
* Integrated GNU-Crypto's Fortuna PRNG, seeding it off /dev/urandom and
      ./prngseed.rnd (if they exist), and reseeding it with data out of
      various crypto operations (unused bits in a DH exchange, intermediary
      bits in a DSA signature generation, extra bits in an ElGamal decrypt).
      The Fortuna implementation under gnu.crypto.prng has been modified to
      use BouncyCastle's SHA256 and Cryptix's AES (since those are the ones
      I2P uses), and the resulting gnu.crypto.prng.* are therefore available
      under GPL+Classpath's linking exception (~= LGPL).  I2P's SecureRandom
      wrapper around it is, of course, public domain.
2005-10-22 18:06:02 +00:00
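The reseeding idea above - folding spare bits from unrelated crypto operations
back into the PRNG pool - can be illustrated with the stock
java.security.SecureRandom, whose setSeed() supplements rather than replaces
the existing seed. The router's actual Fortuna integration is more involved.

    import java.security.SecureRandom;

    public class ReseedSketch {
        private final SecureRandom prng = new SecureRandom();

        /** Mix spare bytes (e.g. unused bits from a DH exchange) into the pool. */
        public void harvestEntropy(byte[] spareBits) {
            prng.setSeed(spareBits);   // supplements, does not replace, the seed
        }

        public void nextBytes(byte[] buf) {
            prng.nextBytes(buf);
        }
    }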
3fbc6f41af first pass gcj makefile
I have no idea what I'm doing, this just shows you how to do it ;)
2005-10-21 01:30:13 +00:00
6534c84578 Caught a few places where peers could be taken off the list but not removed for the purposes of rarest-first calculations. 2005-10-21 01:09:31 +00:00
3816c79193 Clean up some possible thread safety issues. 2005-10-21 00:10:13 +00:00
8458e4e0af Don't count peers we can't connect to for rarest-first calculations. 2005-10-20 23:28:32 +00:00
0b9e4967a0 The rarest-first sort is stable, so randomize the wantedPieces list to ensure a healthy swarm when you have mostly seeders. 2005-10-20 22:31:28 +00:00
05e2da7c22 Request pieces in rarest-first order, instead of randomly. 2005-10-20 22:05:51 +00:00
13bda1f6d7 2005-10-20 dust
* Fix bug in ircclient that prevented it from using its own dest (i.e. it was
      always shared).  (thx for info Ragnarok)
    * Fix crash in Sucker with some bad html.
2005-10-20 19:42:13 +00:00
ea22c73a73 2005-10-20 jrandom
* Workaround a bug in GCJ's Calendar implementation
    * Properly throw an exception in the streaming lib if we try to write to a
      closed stream.  This will hopefully help clear some I2Phex bugs (thanks
      GregorK!)
2005-10-20 08:56:39 +00:00
aa5f1cb18d 2005-10-19 jrandom
* Ported the snark bittorrent client to I2P such that it is compatible
      with i2p-bt and azneti2p.  For usage information, grab an update and run
      "java -jar lib/i2psnark.jar".  It isn't currently multitorrent capable,
      but adding in support would be fairly easy (see PeerAcceptor.java:49)
    * Don't allow leaseSets expiring too far in the future (thanks postman)
2005-10-19 23:23:19 +00:00
ab9c6d59cf remove jbigi.jar from the classpath so we don't mistakenly extract & overwrite the
libjbigi.so.  Yeah, this means that i2psnark uses pure java modPow, but it doesn't do
any real heavy lifting anyway, except a DSA signature every 5-10 minutes.  whoop de do.
2005-10-19 23:18:29 +00:00
76655d01d0 2005-10-19 jrandom
* Ported the snark bittorrent client to I2P such that it is compatible
      with i2p-bt and azneti2p.  For usage information, grab an update and run
      "java -jar lib/i2psnark.jar".  It isn't currently multitorrent capable,
      but adding in support would be fairly easy (see PeerAcceptor.java:49)
    * Don't allow leaseSets expiring too far in the future (thanks postman)
2005-10-19 22:38:45 +00:00
138f7d3b8d This is an I2P port of snark [http://klomp.org/snark], a GPL'ed bittorrent client
The built-in tracker has been removed for simplicity.

Example usage:
  java -jar lib/i2psnark.jar myFile.torrent

or, a more verbose setting:
  java -jar lib/i2psnark.jar --eepproxy 127.0.0.1 4444 \
       --i2cp 127.0.0.1 7654 "inbound.length=2 outbound.length=2" \
       --debug 6 myFile.torrent
2005-10-19 22:02:37 +00:00
df4b998a6a 2005-10-19 jrandom
* Bugfix for the auto-update code to handle different usage patterns
    * Decreased the addressbook recheck frequency to once every 12 hours
      instead of hourly.
    * Handle dynamically changing the HMAC size (again, unless your nym is
      toad or jrandom, ignore this ;)
    * Cleaned up some synchronization/locking code
2005-10-19 05:15:12 +00:00
2d70103f88 2005-10-17 dust
* Exchange the remaining URL with EepGet in Sucker.
    * Allow /TOPIC irc command.
2005-10-18 03:14:01 +00:00
731e26e7d6 2005-10-17 jrandom
* Allow an env prop to configure whether we want to use the backwards
      compatible (but not standards compliant) HMAC-MD5, or whether we want
      to use the not-backwards compatible (but standards compliant) one.  No
      one should touch this setting, unless your name is toad or jrandom ;)
    * Added some new dummy facades
    * Be more aggressive on loading up the router.config before building the
      router context
    * Added new hooks for apps to deal with previously undefined I2NP message
      types without having to modify any code.
    * Demo code for using a castrated router for SSU comm (SSUDemo.java)
2005-10-18 00:39:46 +00:00
f9d3b157f0 Emergency patch to I2PTunnelIRCClient, not sure why zero length strings are getting passed to it. Oh, and my first patch! 2005-10-15 22:55:14 +00:00
12775c416d * Don't filter IRC "MAP" messages (not critical, but it doesn't hurt) 2005-10-14 16:26:31 +00:00
2ca4e63216 * Fixed Syndie's Sucker to not explicitly reference something only found
in sun's JVM (thanks cervantes!)
2005-10-14 16:02:38 +00:00
918d8f851f 2005-10-14 jrandom
* More explicit filter for linux/PPC building (thanks anon!)
2005-10-14 15:05:26 +00:00
20ea680ff0 * 2005-10-14 0.6.1.3 released
2005-10-14  jrandom
    * Added a key explaining peers.jsp a bit (thanks tethra!)
2005-10-14 13:48:04 +00:00
cabb607211 2005-10-13 dust
* Bundled dust's Sucker for pulling RSS/Atom content into SML, which can
      then be injected into Syndie with the Syndie CLI.
    * Bundled ROME and JDOM (BSD and Apache licensed, respectively) for
      RSS/Atom parsing.
2005-10-13  jrandom
    * SSU retransmission choke bugfix (== != !=)
    * Include initial transmissions in the retransmission choke, so that
      if we are already retransmitting a message, we won't send anything
      to that peer other than that message (or ACKs, if necessary)
2005-10-14 02:15:39 +00:00
00a4761b5e 2005-10-13 jrandom
* SSU retransmission choke bugfix (== != !=)
    * Include initial transmissions in the retransmission choke, so that
      if we are already retransmitting a message, we won't send anything
      to that peer other than that message (or ACKs, if necessary)
2005-10-13 09:18:35 +00:00
3516701272 2005-10-12 jrandom
* Choke SSU retransmissions to a peer while there is already a
      retransmission in flight to them.  This currently lets other initial
      transmissions through, since packet loss is often sporadic, but maybe
      this should block initial transmissions as well?
    * Display the retransmission bytes stat on peers.jsp (thanks bar!)
    * Filter QUIT messages in the I2PTunnelIRCClient proxy
2005-10-13 03:54:54 +00:00
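A sketch of the per-peer retransmission choke described above: at most one
retransmission in flight per peer, while initial transmissions (and ACKs) still
pass. Names are invented; this is not the SSU code itself.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class RetransmitChokeSketch {
        private final Map<String, Boolean> retransmitInFlight =
            new ConcurrentHashMap<String, Boolean>();

        /** @return true if this send may go out to the peer right now */
        public boolean maySend(String peer, boolean isRetransmission) {
            if (!isRetransmission)
                return true;                            // initial transmissions still pass
            if (Boolean.TRUE.equals(retransmitInFlight.get(peer)))
                return false;                           // choke: one retransmission at a time
            retransmitInFlight.put(peer, Boolean.TRUE);
            return true;
        }

        /** Call when the in-flight retransmission is ACKed or given up on. */
        public void retransmissionCleared(String peer) {
            retransmitInFlight.remove(peer);
        }
    }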
c4d785667a 2005-10-11 jrandom
* Piggyback the SSU partial ACKs with data packets.  This is backwards
      compatible.
    * Syndie RSS renderer bugfix, plus now include the full entry instead of
      just the blurb before the cut.
2005-10-11 21:05:14 +00:00
123e0ba589 2005-10-11 jrandom
* Piggyback the SSU explicit ACKs with data packets (partial ACKs aren't
      yet piggybacked).  This is backwards compatible.
    * SML parser cleanup in Syndie
2005-10-11 07:07:39 +00:00
197237aa32 2005-10-10 dust
* Implemented a new I2PTunnelIRCClient which locally filters inbound and
      outbound IRC commands for anonymity and security purposes, removing all
      CTCP messages except ACTION, as well as stripping the hostname from the
      USER message (while leaving the nick and 'full name').  The IRC proxy
      doesn't use this by default, but you can enable it by creating a new
      "IRC proxy" tunnel on the web interface, or by changing the tunnel type
      to "ircclient" in i2ptunnel.config.
2005-10-10  jrandom
    * I2PTunnel http client config cleanup and stats
    * Minor SSU congestion tweaks and stats
    * Reduced netDb exploration period
2005-10-10 23:05:18 +00:00
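A very rough sketch of the outbound filtering idea above: blank the hostname in
the USER command while keeping the username and real name, and drop CTCP
payloads other than ACTION. The replacement strings are illustrative; the real
I2PTunnelIRCClient filter covers many more commands.

    public class IrcFilterSketch {
        /** @return the filtered line, or null to drop it entirely */
        public static String filterOutbound(String line) {
            if (line.startsWith("USER ")) {
                // USER <username> <hostname> <servername> :<real name>
                String[] parts = line.split(" ", 5);
                if (parts.length == 5)
                    return "USER " + parts[1] + " hostname localhost :" + parts[4].substring(1);
                return line;
            }
            int ctcp = line.indexOf('\u0001');   // CTCP payloads are \001-delimited
            if (ctcp >= 0 && !line.substring(ctcp).startsWith("\u0001ACTION"))
                return null;                     // drop all CTCP except ACTION
            return line;
        }
    }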
f30dc2b480 2005-10-10 dust
* Implemented a new I2PTunnelIRCClient which locally filters inbound and
      outbound IRC commands for anonymity and security purposes, removing all
      CTCP messages except ACTION, as well as stripping the hostname from the
      USER message (while leaving the nick and 'full name').  The IRC proxy
      doesn't use this by default, but you can enable it by creating a new
      "IRC proxy" tunnel on the web interface, or by changing the tunnel type
      to "ircclient" in i2ptunnel.config.
2005-10-10  jrandom
    * I2PTunnel http client config cleanup and stats
    * Minor SSU congestion tweaks and stats
    * Reduced netDb exploration period
2005-10-10 22:58:18 +00:00
d4ff34eacb * Treat petname names as being case insensitive. 2005-10-09 22:56:02 +00:00
978769a05d * Finished syndie address auto-import. If syndie.importAddresses=true in syndie.config, then new addresses will automatically be imported to the router's petname db. If importaddresses=true in a user's config file, then new addresses will automatically be imported to that user's petname db when they're logged in. 2005-10-09 20:16:30 +00:00
993c70f600 2005-10-09 jrandom
* Syndie CLI cleanup for simpler CLI posting.  Usage shown with
      java -jar lib/syndie.jar
    * Beginnings of the Syndie logging cleanup
    * Delete corrupt Syndie posts
2005-10-09 11:35:13 +00:00
5dfa9ad7f6 2005-10-09 jrandom
* Now that the streaming lib works reasonably, set the default inactivity
      event to send a 0 byte keepalive payload, rather than disconnecting the
      stream.  This should cut the irc netsplits and help out with other long
      lived streams.  The default timeout is now less than the old timeout as
      well, so the keepalive will be sent before earlier builds fire their
      fatal timeouts.
2005-10-09 05:46:57 +00:00
f282fe3854 * Fix probable NPE. 2005-10-09 04:01:29 +00:00
9297564555 * getName, getLocation -> getByName, getByLocation 2005-10-09 03:47:12 +00:00
e7ad516685 * Beautify PetNameDB API.
* Start of syndie auto address export.
2005-10-09 03:32:34 +00:00
ad574c8504 2005-10-08 jrandom
* Use the OS clock for stat timing, since it doesn't jump around (though
      still use the NTP'ed clock for display)
    * Added new DH stats
2005-10-08 22:05:46 +00:00
38617fe0a7 fixed link (thanks Jazzy) 2005-10-07 23:45:48 +00:00
cdee5b2c31 * 2005-10-07 0.6.1.2 released
2005-10-07  jrandom
    * Include the 1 second bandwidth usage on the console rather than the
      1 minute rate, as the 1 second value doesn't have the 1m/5m quantization
      issues.
2005-10-07 20:19:04 +00:00
7f6e65c76f 2005-10-07 jrandom
* Allow the I2PTunnelHTTPServer to send back the first few packets of an
      HTTP response quicker, and initialize the streaming lib's cwin more
      carefully.
    * Added a small web UI to the new Syndie scheduled updater.  If you log in
      as a user authorized to use the remote archive functionality, you can
      request remote archives in your address book to be automatically pulled
      down by checking the "scheduled?" checkbox.
2005-10-07 10:22:59 +00:00
4dd628dbc8 2005-10-05 jrandom
* Allow the first few packets in the stream to fill in their IDs during
      handshake (thanks cervantes, Complication, et al!)  This should fix at
      least some of the intermittent HTTP POST issues.
2005-10-05 23:24:33 +00:00
3b5b48ad8a more cleanup (thanks bar) 2005-10-05 03:48:56 +00:00
c4cac3f3f1 we dont need no grammar 2005-10-05 01:45:21 +00:00
4a49e98c31 caps (thanks bar) 2005-10-05 01:11:25 +00:00
0c0e269e72 youspell (thanks bar) 2005-10-05 00:52:27 +00:00
70b6f97abe 2005-10-04 jrandom
* Syndie patch for single user remote archives (thanks nickless_head!)
    * Handle an invalid netDb store (thanks Complication!)
2005-10-04 23:43:05 +00:00
0013677b83 v2mail, not v2mail2 2005-10-04 23:34:19 +00:00
a98ceda64d postmans changes for i2pmail stuff 2005-10-04 23:33:15 +00:00
91ea1d0395 include two specific issues with freenet 2005-10-04 20:18:18 +00:00
4aa65c3bb3 2005-10-04 jrandom
* Further reduction in unnecessary streaming packets.
2005-10-04 07:36:25 +00:00
0a1f59940a 2005-10-03 jrandom
* Properly reject unroutable IP addresses *cough*
2005-10-04 02:05:52 +00:00
f540dc798b * Changed default update delay to twelve hours, and enforced a minimum
delay of one hour.
2005-10-04 00:27:34 +00:00
30f6f26a68 * Implemented a Syndie auto-updater. 2005-10-03 18:55:09 +00:00
ea3bf3ffc8 might as well commit this draft 2005-10-03 06:21:08 +00:00
831d5ac70c not used 2005-10-03 06:20:47 +00:00
1962867ad9 * Actually implement the bit that returns a 304 if archive.txt hasn't changed. 2005-10-03 02:54:34 +00:00
6019a03029 * 2005-10-01 0.6.1.1 released 2005-10-01 19:20:09 +00:00
df5736f571 * Add a notModified flag to Eepget and Eepget status listeners. 2005-10-01 00:57:32 +00:00
9a73c6defe * Support conditional get for remote archive imports. 2005-09-30 23:42:28 +00:00
9dfa87ba47 2005-09-30 jrandom
* Killed three more streaming lib bugs, one of which caused excess packets
      to be transmitted (dupacking dupacks), one that was the root of many of
      the old hung streams (shrinking highest received), and another that was
      releasing data too soon.
2005-09-30 23:12:57 +00:00
934a269753 2005-09-30 jrandom
* Only allow autodetection of our IP address if we haven't received an
      inbound connection in the last two minutes.
    * Increase the default max streaming resends to 8 from 5 (and down from
      the earlier 10)
2005-09-30 20:29:19 +00:00
1c0dfc242b * Fix history.txt formatting. 2005-09-30 17:51:32 +00:00
3bc3e5d47e * Export petnames from syndie to the router's petname db instead of userhosts.txt. 2005-09-30 07:32:46 +00:00
55869af2cc 2005-09-29 jrandom
* Support noreseed.i2p in addition to .i2pnoreseed for disabling automatic
      reseeding - useful on OSes that make it hard to create dot files.
      Thanks Complication (and anon)!
    * Fixed the installer version string (thanks Frontier!)
    * Added cleaner rejection of invalid IP addresses, shitlist those who send
      us invalid IP addresses, verify again that we are not sending invalid IP
      addresses, and log an error if it happens. (Thanks Complication, ptm,
      and adab!)
2005-09-30 07:17:56 +00:00
9f336dd05b Provide a store method on PetNameDB that takes no arguments, and writes the db back to where it was loaded from. 2005-09-30 05:28:31 +00:00
411ca5e6c3 Ignore case when checking network name. 2005-09-30 05:20:41 +00:00
c528e4db03 * 2005-09-29 0.6.1 released
2005-09-29  jrandom
    * Let syndie users modify their metadata.
    * Reseed the router on startup if there aren't enough peer references
      known locally.  This can be disabled by creating the file .i2pnoreseed
      in your home directory, and the existing detection and reseed handling
      on the web interface is unchanged.
2005-09-29 19:24:43 +00:00
848ead7683 * 2005-09-29 0.6.1 released
2005-09-29  jrandom
    * Let syndie users modify their metadata.
    * Reseed the router on startup if there aren't enough peer references
      known locally.  This can be disabled by creating the file .i2pnoreseed
      in your home directory, and the existing detection and reseed handling
      on the web interface is unchanged.
2005-09-29 19:19:22 +00:00
1b8419b9b5 added tracker-fr.i2p 2005-09-29 03:54:30 +00:00
900420719e 2005-09-28 jrandom
* Fix for at least some (all?) of the wrong stream errors in the streaming
      lib
2005-09-28 09:17:54 +00:00
ef7d1ba964 2005-09-27 jrandom
* Properly suggest filenames for attachments in Syndie (thanks all!)
    * Fixed the Syndie authorization scheme for single user vs. multiuser
2005-09-27 22:42:49 +00:00
ab1654c784 ; (1.181) added syncline.i2p, cerebrum.i2p, news.underscore.i2p,
;               onionforum.i2p, frostmirror.i2p, ptm.i2p, gloinsblog.i2p
;               underscore.i2p, mac7.i2p, wiht.i2p, jazzy.i2p, trwcln.i2p
2005-09-27 22:21:05 +00:00
24bad8e4bb 2005-09-26 jrandom
* I2PTunnel bugfix (thanks Complication!)
    * Increase the SSU cwin slower during congestion avoidance (at k/cwin^2
      instead of k/cwin)
    * Limit the number of inbound SSU sessions being built at once (using
      half of the i2np.udp.maxConcurrentEstablish config prop)
    * Don't shitlist on a message send failure alone (unless there aren't any
      common transports).
    * More careful bandwidth bursting
2005-09-27 07:17:40 +00:00
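The slower congestion-avoidance growth above, in sketch form: past slow start,
each ACK grows the window by k/cwin^2 rather than k/cwin. All constants here
are illustrative, not the router's.

    public class SsuCwinSketch {
        private double cwin = 8 * 1024;               // congestion window, bytes
        private double ssthresh = 64 * 1024;          // slow start threshold, bytes
        private static final double K = 2048 * 2048;  // growth constant

        public void onAck(int bytesAcked) {
            if (cwin < ssthresh)
                cwin += bytesAcked;          // slow start: grow quickly
            else
                cwin += K / (cwin * cwin);   // congestion avoidance: k/cwin^2 (was k/cwin)
        }

        public double getCwin() { return cwin; }
    }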
f6d8200bc8 oops (thanks Complication!) 2005-09-27 00:56:49 +00:00
aef33548b3 2005-09-26 jrandom
* Reworded the SSU introductions config section (thanks duck!)
    * Force identity content encoding for I2PTunnel httpserver requests
      (thanks redzara!)
    * Further x-i2p-gzip bugfixes for the end of streams
    * Reduce the minimum bandwidth limits to 3KBps steady and burst (though
      I2P's performance at 3KBps is another issue)
    * Cleaned up some streaming lib structures
2005-09-26 23:45:52 +00:00
56ecdcce82 2005-09-25 jrandom
* Allow reseeding on the console if the netDb knows less than 30 peers,
      rather than less than 10 (without internet connectivity, we keep the
      last 15 router references)
    * Reenable the x-i2p-gzip HTTP processing by default, flushing the stream
      more aggressively.
    * Show the status that used to be called "ERR-Reject" as "OK (NAT)"
    * Reduced the default maximum number of streaming lib resends of a packet
      (10 retransmits is a bit much with a reasonable RTO)
2005-09-25 23:52:58 +00:00
b9b59ff95f 2005-09-25 Complication
* Better i2paddresshelper handling in the I2PTunnel httpclient, plus a new
      conflict resolution page if the i2paddresshelper parameter differs from
      an existing name to destination mapping.
2005-09-25  jrandom
    * Fix a long standing streaming lib bug (in the inactivity detection code)
    * Improved handling of initial streaming lib packet retransmissions to
      kill the "lost first packet" bug (where a page shows up with the first
      few KB missing)
    * Add support for initial window sizes greater than 1 - useful for
      eepsites to transmit e.g. 4 packets full of data along with the initial
      ACK, thereby cutting down on the rtt latency.  The congestion window
      size can and does still shrink down to 1 packet though.
    * Adjusted the streaming lib retransmission calculation algorithm to be
      more TCP-like.
2005-09-25 09:28:59 +00:00
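In the spirit of the "more TCP-like" retransmission change above, a sketch of
an RFC 2988-style retransmission timeout: smoothed RTT plus four times the RTT
variance, with a floor. The streaming lib's actual constants may differ.

    public class RtoSketch {
        private double srtt = -1;   // smoothed RTT, ms (-1 = no sample yet)
        private double rttvar;      // RTT variance, ms
        private double rto = 3000;  // initial RTO, ms

        public void addSample(double rttMillis) {
            if (srtt < 0) {                     // first measurement
                srtt = rttMillis;
                rttvar = rttMillis / 2;
            } else {
                rttvar = 0.75 * rttvar + 0.25 * Math.abs(srtt - rttMillis);
                srtt = 0.875 * srtt + 0.125 * rttMillis;
            }
            rto = Math.max(1000, srtt + 4 * rttvar);  // clamp to a 1 second floor
        }

        public double getRto() { return rto; }
    }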
aa9dd3e5c6 mention syndie bug stuff (good idea jnymo) 2005-09-24 04:08:42 +00:00
30bd659149 2005-09-21 redzara
* Use ISO-8859-1 for the susidns xml
2005-09-21 23:01:00 +00:00
3286ca49c8 2005-09-21 susi
* Bugfix in susidns for deleting entries
2005-09-21  jrandom
    * Add support for HTTP POST to EepGet
    * Use HTTP POST for syndie bulk fetches, since there's a lot of data to
      put in that URL.
2005-09-21 06:43:04 +00:00
7700d12178 damn thee, syntax 2005-09-20 03:43:57 +00:00
557b7e3f2e only build exe files on ant dist or ant installer 2005-09-20 03:38:14 +00:00
3e1e9146e1 don't build the exe files on x86_64 or osx 2005-09-20 03:17:06 +00:00
40d8d1aac1 * Made MetaNamingService the default naming service. 2005-09-19 00:56:47 +00:00
1457b8efba include the lib64 wrapper (thanks mule) 2005-09-19 00:36:59 +00:00
3821e80ac8 2005-09-18 jrandom
* Added support for pure 64bit linux with jbigi and the java service
      wrapper (no need for jcpuid if we're on os.arch=amd64).  Thanks mule
      et al for help testing!
    * UI cleanup in Syndie (thanks gloin and bar!)
2005-09-18 23:08:16 +00:00
d40bb459ea * Get the PetNameDB for the PetNameNamingService from the router context. 2005-09-18 22:36:10 +00:00
edf04f07c9 * Implemented a MetaNamingService. 2005-09-18 08:50:56 +00:00
2bdea23986 * Updated history.txt. 2005-09-18 05:41:46 +00:00
6be0c4b694 * Moved PetName and PetNameDB to core.
* Implement PetNameNamingService.
2005-09-18 05:28:51 +00:00
2a272f465c * 2005-09-17 0.6.0.6 released
2005-09-17  jrandom
    * Clean up syndie a bit more and bundle a default introductory post with
      both new installs and updates.
    * Typo fixes on the console (thanks bar!)
2005-09-18 01:29:58 +00:00
a8ecd32b45 2005-09-17 jrandom
* Updated the bandwidth limiter to use two tiers of bandwidth - our normal
      steady state rate, plus a new limit on how fast we transfer when
      bursting.  This is different from the old "burst as fast as possible
      until we're out of tokens" policy, and should help those with congested
      networks.  See /config.jsp to manage this rate.
    * Bugfixes in Syndie to handle missing cache files (no data was lost, the
      old posts just didn't show up).
    * Log properly in EepPost
2005-09-17 23:01:44 +00:00
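A sketch of the two-tier limiter described above: tokens accumulate at the
steady-state rate (up to a cap), and a second, higher rate bounds how fast the
saved-up tokens may be spent while bursting. Illustrative only; see
/config.jsp for the real knobs.

    public class TwoTierLimiterSketch {
        private final long steadyBps;   // long-term rate, bytes/sec
        private final long burstBps;    // ceiling while spending saved tokens
        private final long maxTokens;   // how much burst we are allowed to save up
        private long tokens;
        private long lastRefill = System.currentTimeMillis();

        public TwoTierLimiterSketch(long steadyBps, long burstBps, long maxTokens) {
            this.steadyBps = steadyBps;
            this.burstBps = burstBps;
            this.maxTokens = maxTokens;
        }

        /** @return how many bytes may be sent during the next interval */
        public synchronized long allocate(long intervalMillis) {
            long now = System.currentTimeMillis();
            tokens = Math.min(maxTokens, tokens + steadyBps * (now - lastRefill) / 1000);
            lastRefill = now;
            long burstCap = burstBps * intervalMillis / 1000;  // second tier: burst ceiling
            long granted = Math.min(tokens, burstCap);
            tokens -= granted;
            return granted;
        }
    }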
20c42a175d 2005-09-17 jrandom
* Bugfixes in Syndie to handle missing cache files (no data was lost, the
      old posts just didn't show up).
    * Log properly in EepPost
2005-09-17 20:08:25 +00:00
d6c3ffde87 2005-09-17 jrandom
* Added the natively compiled jbigi and patched java service wrapper for
      OS X.  Thanks Bill Dorsey for letting me use your machine!
    * Don't build i2p.exe or i2pinstall.exe when run on OS X machines, as we
      don't bundle the binutils necessary (and there'd be a naming conflict
      if we did).
    * Added 'single user' functionality to syndie - if the single user
      checkbox on the admin page is checked, all users are allowed to control
      the instance and sync up with remote syndie nodes.
    * Temporarily disable the x-i2p-gzip in i2ptunnel until it is more closely
      debugged.
2005-09-17 07:31:48 +00:00
177e0ae6a3 2005-09-16 jrandom
* Reject unroutable IPs in SSU like we do for the TCP transport (unless
      you have i2np.udp.allowLocal=true defined - useful for private nets)
2005-09-16 21:24:42 +00:00
dab1b4d256 2005-09-16 jrandom
* Adjust I2PTunnelHTTPServer so it can be used for outproxy operators
      (just specify the spoofed host as an empty string), allowing them to
      honor x-i2p-gzip encoding.
    * Let windows users build the exes too (thanks bar and redzara!)
    * Allow I2PTunnel httpserver operators to disable gzip compression on
      individual tunnels with the i2ptunnel.gzip=false client option
      (good idea susi!)
2005-09-16 18:28:26 +00:00
3aba12631b use the logger, not stdout/stderr 2005-09-16 06:58:55 +00:00
cfee6430d4 xml 2005-09-16 04:54:48 +00:00
6b96df1cec runplain.sh, not startRouter.sh 2005-09-16 04:50:28 +00:00
deecfa5047 no message 2005-09-16 04:40:07 +00:00
6ca3f01038 launch4j 2005-09-16 04:34:59 +00:00
d89f589f2b 2005-09-16 jrandom
* Added the i2p.exe and i2pinstall.exe for windows users, using launch4j.
    * Added runplain.sh for *nix/osx users having problems using the java
      service wrapper (called from the install dir as: sh runplain.sh)
    * Bundle susidns and syndie, with links on the top nav
    * Have I2PTunnelHTTPClient and I2PTunnelHTTPServer use the x-i2p-gzip
      content-encoding (if offered), reducing the payload size before it
      reaches the streaming lib.  The existing compression is at the i2cp
      level, so we've been packetizing 4KB of uncompressed data and then
      compressing those messages, rather than compressing and then packetizing
      4KB of compressed data.  This should reduce the number of round trips
      to fetch web pages substantially.
    * Adjust the startup and timing of the addressbook so that susidns always
      has config to work off, and expose a method for susidns to tell it to
      reload its config and rerun.
2005-09-16 04:12:24 +00:00
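The "compress before packetizing" point above, sketched: gzip the HTTP payload
first and then write it to the I2P stream in ~4KB chunks, so each packet
carries compressed data. Not the actual I2PTunnel code.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.zip.GZIPOutputStream;

    public class GzipThenPacketizeSketch {
        public static void send(byte[] httpPayload, OutputStream i2pStream) throws IOException {
            ByteArrayOutputStream compressed = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(compressed);
            gz.write(httpPayload);
            gz.finish();
            byte[] body = compressed.toByteArray();
            for (int off = 0; off < body.length; off += 4096) {   // ~4KB per packet
                int len = Math.min(4096, body.length - off);
                i2pStream.write(body, off, len);
            }
            i2pStream.flush();
        }
    }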
8c1895e04f imported fixed susidns 2005-09-16 04:04:40 +00:00
c3d0132a98 test cvs again... 2005-09-15 05:57:28 +00:00
d955279d17 minor news (and test cvs...) 2005-09-15 05:56:10 +00:00
76266dce0d 2005-09-15 jrandom
* Error handling for failed intro packets (thanks red.hand!)
    * More carefully verify intro addresses
2005-09-15 05:39:31 +00:00
5694206b35 2005-09-13 jrandom
* More careful error handling with introductions (thanks dust!)
    * Fix the forceIntroducers checkbox on config.jsp (thanks Complication!)
    * Hide the shitlist on the summary so it doesn't confuse new users.
2005-09-13 23:02:35 +00:00
4293a18726 2005-09-12 comwiz
* Migrated the router tests to junit
2005-09-13 09:06:07 +00:00
9865af4174 2005-09-12 jrandom
* Removed guaranteed delivery mode entirely (so existing i2phex clients
      using it can get the benefits of mode=best_effort).  Guaranteed delivery
      is offered at the streaming lib level.
    * Improve the peer selection code for peer testing, as everyone now
      supports tests.
    * Give the watchdog its fangs - if it detects obscene job lag or if
      clients have been unable to get a leaseSet for more than 5 minutes,
      restart the router.  This was disabled a year ago due to spurious
      restarts, and can be disabled by "watchdog.haltOnHang=false", but the
      cause of the spurious restarts should be gone.
2005-09-13 03:32:29 +00:00
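
A toy reading of the watchdog override described above, assuming the flag lives in an ordinary java.util.Properties-backed config (not the router's actual watchdog code):

    import java.util.Properties;

    public class WatchdogSketch {
        public static void main(String[] args) {
            Properties routerConfig = new Properties();           // stand-in for the router config
            routerConfig.setProperty("watchdog.haltOnHang", "false");

            // default is true: restart on obscene job lag or 5+ minutes without a leaseSet
            boolean halt = Boolean.valueOf(
                    routerConfig.getProperty("watchdog.haltOnHang", "true")).booleanValue();
            System.out.println(halt ? "watchdog may restart the router"
                                    : "watchdog restarts disabled");
        }
    }
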
c8c109093d 2005-09-12 jrandom
* Bugfix for skewed store which could kill a UDP thread (causing complete
      comm failure and eventual OOM)
2005-09-13 01:12:43 +00:00
b5784d6025 2005-09-12 jrandom
* More aggressively publish updated routerInfo.
    * Expose the flag to force SSU introductions on the router console
    * Don't give people the option to disable SNTP time sync, at least not
      through the router console, because there is no reason to disable it.
      No, not even if your OS is "ntp synced", because chances are, it's not.
2005-09-13 00:11:56 +00:00
31bdb8909a tino.i2p and fproxy.tino.i2p 2005-09-12 22:40:17 +00:00
ee921c22ae use the low level rates (thanks bar / complication) 2005-09-12 02:58:13 +00:00
172ffd0434 use the OS time, since it doesn't skew as much (especially on startup) 2005-09-11 04:37:15 +00:00
d9b4406c09 2005-09-10 jrandom
* Test the router's reachability earlier and more aggressively
    * Use the low level bandwidth limiter's rates for the router console, and
      if the router has net.i2p.router.transport.FIFOBandwidthLimiter=INFO in
      the logger config, keep track of the 1 second transfer rates as the stat
      'bw.sendBps1s' and 'bw.recvBps1s', allowing closer monitoring of burst
      behavior.
2005-09-11 03:22:51 +00:00
8ac0e85df4 updated to hq.postman.i2p 2005-09-11 01:40:15 +00:00
249ccd5e3c now that its all implemented... 2005-09-10 23:18:41 +00:00
727d76d43e deal with posts containing no tags by using the implicit tag "[none]" (thanks ardvark!) 2005-09-10 06:07:25 +00:00
44770b7c07 2005-09-09 jrandom
* Added preliminary support for NAT hole punching through SSU introducers
    * Honor peer test results from peers that we have an SSU session with if
      those sessions are idle for 3 minutes or more.
2005-09-10 04:30:36 +00:00
b5d571c75f 2005-09-09 cervantes
* New build due to change in build number :P (thanks ugha!)
2005-09-10 01:13:49 +00:00
da56d83716 First pass at a new naming system. Probably the last as well. So sad :). 2005-09-09 19:38:43 +00:00
f777e213ce search.i2p 2005-09-08 05:08:33 +00:00
79906f5a7d added search.i2p 2005-09-08 05:01:01 +00:00
54074e76b5 2005-09-07 BarkerJr
* HTML cleanup for the router console (thanks!)
2005-09-07  jrandom
    * Lay the foundation for 'client routers' - the ability for peers to opt
      out of participating in tunnels entirely due to firewall/NAT issues.
      Individual routers have control over where those peers are used in
      tunnels - in outbound or inbound, exploratory or client tunnels, or
      none at all.  The defaults with this build are to simply act as before -
      placing everyone as potential participants in any tunnel.
    * Another part of the foundation includes the option for netDb
      participants to refuse to answer queries regarding peers who are marked
      as unreachable, though this too is disabled by default (meaning the
      routerInfo is retrievable from the netDb).
2005-09-07 22:31:11 +00:00
c2ea8db683 Look for names in privatehosts.txt as well as userhosts.txt and hosts.txt. 2005-09-07 02:07:41 +00:00
744671a518 Adjusted wording on the bandwidth limiter controls to reflect new router defaults 2005-09-07 01:13:56 +00:00
7f5b127bbc fix0rz. 2005-09-06 20:45:21 +00:00
89eff0c628 tyop 2005-09-06 20:35:28 +00:00
177aeebb1c stuff 2005-09-06 20:15:55 +00:00
e0e6bde4a5 throw css around like mad (very minimal stylesheet in place) 2005-09-06 20:05:09 +00:00
e6b145716f allow publishing to a remote archive automatically when posting (optionally) with 0 additional clicks
allow transparently attaching any 'public' pet names in your addressbook to a blog post (with a checkbox)
2005-09-06 03:03:55 +00:00
f958342704 admin page - no more editing config props manually (w3wt) 2005-09-05 20:53:25 +00:00
5a1f738505 2005-09-05 jrandom
* Expose the HTTP headers to EepGet status listeners
    * Handle DSA key failures properly (if the signature is not invertible, it
      is obviously invalid)
2005-09-05 19:29:55 +00:00
8147cdf40c 2005-09-05 jrandom
* Expose the HTTP headers to EepGet status listeners
    * Handle DSA key failures properly (if the signature is not invertible, it
      is obviously invalid)
also, syndie now properly detects whether the remote archive can send a filtered export.zip
by examining the HTTP headers for X-Syndie-Export-Capable: true.  If the remote archive
does not set that header (and neither freesites, nor Apache, nor anything other than the ArchiveServlet will),
it uses individual HTTP requests for individual blog posts and metadata fetches.
2005-09-05 19:27:08 +00:00
6afc64ac39 deal with locations that have : in them (aka http://glog.i2p/archive/archive.txt) 2005-09-05 17:09:19 +00:00
61b8e3598b added rss2.0 support via rss.jsp
rss.jsp can in turn receive all the filters that index.jsp can - e.g. ?blog=blah or ?selector=group://foo,
and by default returns the latest 10 values (overridden with ?wanted=15).  If you want it to pull
with a user's blog's preferences (filters, groups, etc), you can specify ?login=user&password=password
2005-09-05 05:33:33 +00:00
3bb445ff40 better filtering/ignoring
ui improvements (per isamoor's suggestions)
more petname integration
2005-09-05 01:26:19 +00:00
59a8037599 allow exporting eepsite destinations from the syndie database into userhosts.txt (so the eepproxy can get it) 2005-09-05 00:00:11 +00:00
09cb5fad59 allow you to bookmark syndie archives and later recall those bookmarks on the remote page 2005-09-04 22:43:22 +00:00
ee8e45ecf7 allow web based control of who gets to access remote repositories.
if the prop "syndie.remotePassword" is set, users can enter it while viewing their metadata
2005-09-04 21:51:17 +00:00
339868838d thanks BarkerJr :) 2005-09-04 20:26:42 +00:00
c5579fa349 (the filtered blogs may be out of order) 2005-09-04 19:33:00 +00:00
d4a859547c 2005-09-04 jrandom
* Don't persist peer profiles until we are shutting down, as the
      persistence process gobbles RAM and wall time.
    * Bugfix to allow you to check/uncheck the sharedClient setting on the
      I2PTunnel web interface.
    * Be more careful when expiring a failed tunnel message fragment so we
      don't drop the data while attempting to read it.
2005-09-04 19:15:49 +00:00
779aa240d2 added glog.i2p 2005-09-03 04:07:50 +00:00
9aaad00383 0.6.0.5 2005-09-02 19:10:05 +00:00
6422f7ef78 2005-09-02 jrandom
* Don't refuse to send a netDb store if the targeted peer has failed a
      bit (the value was an arbitrary amount).
    * Logging changes
2005-09-02 18:34:14 +00:00
3e51584b3c 0.6.0.4 2005-09-01 20:27:35 +00:00
4ff8a53084 2005-09-01 jrandom
* Don't send out a netDb store of a router if it is more than a few hours
      old, even if someone asked us for it.
2005-09-01 06:55:00 +00:00
ccb73437c4 2005-08-31 jrandom
* Don't publish leaseSets to the netDb if they will never be looked for -
      namely, if they are for destinations that only establish outbound
      streams.  I2PTunnel's 'client' and 'httpclient' proxies have been
      modified to tell the router that it doesn't need to publish their
      leaseSet (by setting the I2CP config option 'i2cp.dontPublishLeaseSet'
      to 'true').
    * Don't publish the top 10 peer rankings of each router in the netdb, as
      it isn't being watched right now.
2005-09-01 00:26:20 +00:00
b43114f61b 2005-08-31 jrandom
* Don't publish leaseSets to the netDb if they will never be looked for -
      namely, if they are for destinations that only establish outbound
      streams.  I2PTunnel's 'client' and 'httpclient' proxies have been
      modified to tell the router that it doesn't need to publish their
      leaseSet (by setting the I2CP config option 'i2cp.dontPublishLeaseSet'
      to 'true').
    * Don't publish the top 10 peer rankings of each router in the netdb, as
      it isn't being watched right now.
2005-09-01 00:20:16 +00:00
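
A minimal sketch of the client side of the i2cp.dontPublishLeaseSet option above, assuming only that it is passed like any other I2CP/session option (the Properties object is a stand-in for whatever option map the client hands to its session factory):

    import java.util.Properties;

    public class DontPublishLeaseSetSketch {
        public static void main(String[] args) {
            // Options a purely-outbound client (e.g. an i2ptunnel 'httpclient') would
            // pass when creating its I2CP session / streaming socket manager.
            Properties opts = new Properties();
            opts.setProperty("i2cp.dontPublishLeaseSet", "true");

            // ...hand 'opts' to the session factory of your choice; since nobody will
            // ever look this destination up, the router skips the netDb publish.
            System.out.println("session options: " + opts);
        }
    }
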
9bd87ab511 make it work with any host charset or content charset 2005-08-31 09:50:23 +00:00
b6ea55f7ef more error handling (thanks frosk) 2005-08-30 02:39:37 +00:00
5f18cec97d 2005-08-29 jrandom
* Added the new test Floodfill netDb
2005-08-30 02:04:17 +00:00
3ba921ec0e 2005-08-29 jrandom
* Added the new test Floodfill netDb
2005-08-30 01:59:11 +00:00
e313da254c 2005-08-27 jrandom
* Minor logging and optimization tweaks in the router and SDK
    * Use ISO-8859-1 in the XML files (thanks redzara!)
    * The consolePassword config property can now be used to bypass the router
      console's nonce checking, allowing CLI restarts
2005-08-27 22:46:22 +00:00
8660cf0d74 2005-08-27 jrandom
* Minor logging and optimization tweaks in the router and SDK
    * Use ISO-8859-1 in the XML files (thanks redzara!)
    * The consolePassword config property can now be used to bypass the router
      console's nonce checking, allowing CLI restarts
2005-08-27 22:15:35 +00:00
e0bfdff152 TZ asap 2005-08-25 21:08:13 +00:00
c27aed3603 fix up the entryId calc 2005-08-25 21:07:18 +00:00
cdc6002f0e no message 2005-08-25 21:01:15 +00:00
4cf3d9c1a2 HTTP file upload (rfc 1867) helper 2005-08-25 21:00:09 +00:00
0473e08e21 remote w0rks 2005-08-25 20:59:46 +00:00
346faa3de2 2005-08-24 jrandom
* Catch errors with corrupt tunnel messages more gracefully (no need to
      kill the thread and cause an OOM...)
    * Don't skip shitlisted peers for netDb store messages, as they aren't
      necessarily shitlisted by other people (though they probably are).
    * Adjust the netDb store per-peer timeout based on each particular peer's
      profile (timeout = 4x their average netDb store response time)
    * Don't republish leaseSets to *failed* peers - send them to peers who
      replied but just didn't know the value.
    * Set a 5 second timeout on the I2PTunnelHTTPServer reading the client's
      HTTP headers, rather than blocking indefinitely.  HTTP headers should be
      sent entirely within the first streaming packet anyway, so this won't be
      a problem.
    * Don't use the I2PTunnel*Server handler thread pool by default, as it may
      prevent any clients from accessing the server if the handlers get
      blocked by the streaming lib or other issues.
    * Don't overwrite a known status (OK/ERR-Reject/ERR-SymmetricNAT) with
      Unknown.
2005-08-24 22:55:25 +00:00
5ec6dca64d 2005-08-23 jrandom
* Removed the concept of "no bandwidth limit" - if none is specified, it's
      16KBps in/out.
    * Include ack packets in the per-peer cwin throttle (they were part of the
      bandwidth limit though).
    * Tweak the SSU cwin operation to get more accurate estimates under
      congestion.
    * SSU improvements to resend more efficiently.
    * Added a basic scheduler to eepget to fetch multiple files sequentially.
2005-08-23 22:43:51 +00:00
1a6b49cfb8 2005-08-23 jrandom
* Removed the concept of "no bandwidth limit" - if none is specified, it's
      16KBps in/out.
    * Include ack packets in the per-peer cwin throttle (they were part of the
      bandwidth limit though).
    * Tweak the SSU cwin operation to get more accurate estimates under
      congestion.
    * SSU improvements to resend more efficiently.
    * Added a basic scheduler to eepget to fetch multiple files sequentially.
2005-08-23 21:25:49 +00:00
c7b75df390 Added announcement about the new Irc2P server at irc.freshcoffee.i2p 2005-08-22 13:03:11 +00:00
f97c09291b 0.6.0.3 2005-08-21 19:21:50 +00:00
8f2a5b403c * 2005-08-21 0.6.0.3 released
2005-08-21  jrandom
    * If we already have an established SSU session with the Charlie helping
      test us, cancel the test with the status of "unknown".
2005-08-21 18:39:05 +00:00
ea41a90eae sanity checking 2005-08-21 18:37:57 +00:00
b1dd29e64d added syndie.i2p and syndiemedia.i2p 2005-08-21 18:33:58 +00:00
46e47c47ac ewps 2005-08-21 18:19:22 +00:00
b7bf431f0d [these are not the droids you are looking for] 2005-08-21 18:08:05 +00:00
7f432122d9 added irc.freshcoffee.i2p (new IRC server on the irc2p network) 2005-08-20 01:19:51 +00:00
e7be8c6097 Added references to the new irc2p server: irc.freshcoffee.i2p 2005-08-20 01:18:38 +00:00
adf56a16e1 2005-08-17 jrandom
* Revise the SSU peer testing protocol so that Bob verifies Charlie's
      viability before agreeing to Alice's request.  This doesn't work with
      older SSU peer test builds, but is backwards compatible (older nodes
      won't ask newer nodes to participate in tests, and newer nodes won't
      ask older nodes to either).
2005-08-17 20:16:27 +00:00
11204b8a2b 2005-08-17 jrandom
* Revise the SSU peer testing protocol so that Bob verifies Charlie's
      viability before agreeing to Alice's request.  This doesn't work with
      older SSU peer test builds, but is backwards compatible (older nodes
      won't ask newer nodes to participate in tests, and newer nodes won't
      ask older nodes to either).
2005-08-17 20:05:01 +00:00
cade27dceb added surrender.adab.i2p 2005-08-17 00:42:15 +00:00
5597d28e59 Removing references to irc.duck.i2p, adding references to irc.arcturus.i2p, and replacing current ircProxy default destination string with "irc.postman.i2p,irc.arcturus.i2p" 2005-08-16 09:35:58 +00:00
0502fec432 added terror.i2p 2005-08-15 18:44:04 +00:00
a6714fc2de Adding irc.arcturus.i2p, a new server for the soon-to-be Irc2P network 2005-08-14 15:52:12 +00:00
1219dadbd5 2005-08-12 jrandom
* Keep detailed stats on the peer testing, publishing the results in the
      netDb.
    * Don't overwrite the status with 'unknown' unless we haven't had a valid
      status in a while.
    * Make sure to avoid shitlisted peers for peer testing.
    * When we get an unknown result to a peer test, try again soon afterwards.
    * When a peer tells us that our address is different from what we expect,
      if we've done a recent peer test with a result of OK, fire off a peer
      test to make sure our IP/port is still valid.  If our test is old or the
      result was not OK, accept their suggestion, but queue up a peer test for
      later.
    * Don't try to do a netDb store to a shitlisted peer, and adjust the way
      we monitor netDb store progress (to clear up the high netDb.storePeers
      stat)
2005-08-12 23:54:46 +00:00
77b995f5ed 2005-08-10 jrandom
* Deployed the peer testing implementation to be run every few minutes on
      each router, as well as any time the user requests a test manually.  The
      tests do not reconfigure the ports at the moment, merely determine under
      what conditions the local router is reachable.  The status shown in the
      top left will be "ERR-SymmetricNAT" if the user's IP and port show up
      differently for different peers, "ERR-Reject" if the router cannot
      receive unsolicited packets or the peer helping test could not find a
      collaborator, "Unknown" if the test has not been run or the test
      participants were unreachable, or "OK" if the router can receive
      unsolicited connections and those connections use the same IP and port.
2005-08-10 23:55:40 +00:00
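
A condensed reading of the status logic described above; the booleans are stand-ins for the real Alice/Bob/Charlie packet exchanges:

    public class PeerTestStatusSketch {
        enum Status { OK, ERR_REJECT, ERR_SYMMETRIC_NAT, UNKNOWN }

        static Status classify(boolean testRan, boolean participantsReachable,
                               boolean charlieFound, boolean gotUnsolicited,
                               boolean sameIpPortEverywhere) {
            if (!testRan || !participantsReachable) return Status.UNKNOWN;           // nothing learned yet
            if (!gotUnsolicited || !charlieFound)   return Status.ERR_REJECT;        // inbound blocked or no collaborator
            if (!sameIpPortEverywhere)              return Status.ERR_SYMMETRIC_NAT; // NAT rewrites per peer
            return Status.OK;
        }

        public static void main(String[] args) {
            System.out.println(classify(true, true, true, true, true));    // OK
            System.out.println(classify(true, true, true, true, false));   // ERR_SYMMETRIC_NAT
            System.out.println(classify(true, true, true, false, true));   // ERR_REJECT
            System.out.println(classify(false, false, false, false, false)); // UNKNOWN
        }
    }
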
2f53b9ff68 0.6.0.2 2005-08-09 18:55:31 +00:00
d84d045849 deal with full windows without *cough* NPEs
(how many times can I cvs rtag -F before going crazy?)
2005-08-08 21:20:08 +00:00
d8e72dfe48 foo 2005-08-08 20:49:17 +00:00
88b9f7a74c "ERROR [eive on 8887] uter.transport.udp.UDPReceiver: Dropping inbound packet with 1 queued for 1912 packet handlers: Handlers: 3 handler 0 state: 2 handler 1 state: 2 handler 2 state: 2"
state = 2 means all three handlers are blocking on udpReceiver.receive())
this can legitimately happen if the bandwidth limiter or router throttle chokes the receive for >= 1s.
2005-08-08 20:42:13 +00:00
6a19501214 2005-08-08 jrandom
* Add a configurable throttle to the number of concurrent outbound SSU
      connection negotiations (via i2np.udp.maxConcurrentEstablish=4).  This
      may help those with slow connections to get integrated at the start.
    * Further fixlets to the streaming lib
2005-08-08 20:35:50 +00:00
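
One way to picture the i2np.udp.maxConcurrentEstablish throttle above, sketched with a plain java.util.concurrent.Semaphore (the actual UDP transport code is more involved):

    import java.util.concurrent.Semaphore;

    public class EstablishThrottleSketch {
        public static void main(String[] args) {
            int max = Integer.parseInt(System.getProperty("i2np.udp.maxConcurrentEstablish", "4"));
            final Semaphore slots = new Semaphore(max);

            for (int peer = 0; peer < 10; peer++) {
                final int p = peer;
                if (slots.tryAcquire()) {
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(100);        // pretend to negotiate an SSU session
                                System.out.println("established with peer " + p);
                            } catch (InterruptedException ie) {
                                // ignore in this sketch
                            } finally {
                                slots.release();          // free the slot for the next candidate
                            }
                        }
                    }).start();
                } else {
                    System.out.println("peer " + p + " deferred: too many establishments in flight");
                }
            }
        }
    }
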
ba30b56c5f 2005-08-07 Complication
* Display the average clock skew for both SSU and TCP connections
2005-08-07  jrandom
    * Fixed the long standing streaming lib bug where we could lose the first
      packet on retransmission.
    * Avoid an NPE when a message expires on the SSU queue.
    * Adjust the streaming lib's window growth factor with an additional
      Vegas-esque congestion detection algorithm.
    * Removed an unnecessary SSU session drop
    * Reduced the MTU (until we get a working PMTU lib)
    * Defer tunnel acceptance until we know how to reach the next hop,
      rejecting it if we can't find them in time.
    * If our netDb store of our leaseSet fails, give it a few seconds before
      republishing.
2005-08-07 19:31:58 +00:00
a375e4b2ce added more postman services (w3wt) 2005-08-07 19:27:22 +00:00
44fd71e17f added i2p-bt.postman.i2p 2005-08-05 21:20:30 +00:00
b41c378de9 Removed reference and link to Invisiblechat/IIP from the router console greeting page (because IIP's dead, Jim... how many times does it need to be said?) and added irc.postman.i2p. 2005-08-05 19:20:52 +00:00
4ce6b308b3 * 2005-08-03 0.6.0.1 released
2005-08-03  jrandom
    * Backed out an inadvertent change to the netDb store redundancy factor.
    * Verify tunnel participant caching.
    * Logging cleanup
2005-08-03 18:58:12 +00:00
72c6e7d1c5 2005-08-01 duck
* Update IzPack to 3.7.2 (build 2005.04.22). This fixes bug #82.
2005-08-02 03:26:51 +00:00
7ca3f22e77 2005-08-01 duck
* Update IzPack to 3.7.2 (build 2005.04.22)
      This fixes bug #82
2005-08-02 03:25:51 +00:00
59790dafef 2005-08-01 duck
* Fix an addressbook NPE when a new hostname from the master addressbook
      didn't exist in the router addressbook.
    * Fix an addressbook bug which caused subscriptions not to be parsed at
      all. (Oops!)
2005-08-01 13:35:11 +00:00
7227cae6ef 2005-08-01 duck
* Fix an addressbook NPE when a new hostname from the master addressbook
      didn't exist in the router addressbook.
    * Fix an addressbook bug which caused subscriptions not to be parsed at
      all. (Oops!)
2005-08-01 13:34:10 +00:00
03bba51c1e * Fixed some issues with the merge logic that caused addressbooks to be written to disk even when unmodified.
* Fixed a bug that could result in a downloaded remote addressbook not being deleted, halting the update process.
2005-08-01 03:32:37 +00:00
0637050cbc No real reason for eepget to retry, addressbook will try again in an hour, and it makes updates take an absurdly long time. 2005-08-01 00:10:54 +00:00
7f58a68c5a Whoops! Forgot the new build file. I broke cvs! 2005-07-31 22:49:25 +00:00
8120b0397c Move addressbook off URL and on to EepGet. Should no longer leak dns lookups, but now only supports conditional GET with HTTP 1.1. If that's a big problem, it can be fixed in future. 2005-07-31 22:19:10 +00:00
fbe42b7dce Added HTTP 1.1 conditional GET support to EepGet. 2005-07-31 22:17:10 +00:00
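
For reference, a plain-JDK HTTP/1.1 conditional GET, i.e. the If-Modified-Since / 304 exchange the previous two commits refer to (illustrative only, not EepGet's code; the URL is a placeholder):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConditionalGetSketch {
        public static void main(String[] args) throws Exception {
            URL url = new URL(args.length > 0 ? args[0] : "http://example.com/hosts.txt");
            long lastFetched = System.currentTimeMillis() - 60 * 60 * 1000;  // e.g. an hour ago

            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setIfModifiedSince(lastFetched);          // adds the If-Modified-Since header
            int code = con.getResponseCode();

            if (code == HttpURLConnection.HTTP_NOT_MODIFIED) {
                System.out.println("304 - addressbook unchanged, nothing to merge");
            } else {
                System.out.println(code + " - fetch the body and remember Last-Modified: "
                                   + con.getLastModified());
            }
            con.disconnect();
        }
    }
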
def24e34ad 2005-07-31 jrandom
* Adjust the netDb search and store per peer timeouts to match the average
      measured per peer success times, rather than huge fixed values.
    * Optimized and reverified the netDb peer selection / retrieval process
      within the kbuckets.
    * Drop TCP connections that don't have any useful activity in 10 minutes.
    * If i2np.udp.fixedPort=true, never change the externally published port,
      even if we are autodetecting the IP address.
(also includes most of the new peer/NAT testing, but that's not used atm)
2005-07-31 21:35:26 +00:00
593253e6a3 update compilation target 2005-07-31 01:11:12 +00:00
56dd4cb8b5 * added luckypunk.i2p to hosts.txt 2005-07-30 02:27:12 +00:00
10c6f67500 oops 2005-07-28 20:33:27 +00:00
5c1f968afa no message 2005-07-27 20:16:44 +00:00
aaaf437d62 skip properly (DataHelper.read confusion) 2005-07-27 20:15:35 +00:00
a8a866b5f6 * 2005-07-27 0.6 released
2005-07-27  jrandom
    * Enabled SSU as the default top priority transport, adjusting the
      config.jsp page accordingly.
    * Add verification fields to the SSU and TCP connection negotiation (not
      compatible with previous builds)
    * Enable the backwards incompatible tunnel crypto change as documented in
      tunnel-alt.html (have each hop encrypt the received IV before using it,
      then encrypt it again before sending it on)
    * Disable the I2CP encryption, leaving in place the end to end garlic
      encryption (another backwards incompatible change)
    * Adjust the protocol versions on the TCP and SSU transports so that they
      won't talk to older routers.
    * Fix up the config stats handling again
    * Fix a rare off-by-one in the SSU fragmentation
    * Reduce some unnecessary netDb resending by including the peers queried
      successfully in the store redundancy count.
2005-07-27 19:03:43 +00:00
aeb8f02269 2005-07-22 jrandom
* Use the small thread pool for I2PTunnelHTTPServer (already used for
      I2PTunnelServer)
    * Minor memory churn reduction in I2CP
    * Small stats update
2005-07-23 00:15:56 +00:00
45767360ab 2005-07-21 jrandom
* Fix in the SDK for a bug which would manifest itself as misrouted
      streaming packets when a destination has many concurrent streaming
      connections (thanks duck!)
    * No more "Graceful shutdown in -18140121441141s"
2005-07-21 22:37:14 +00:00
3563aa2e4d 2005-07-20 jrandom
* Allow the user to specify an external port # for SSU even if the external
      host isn't specified (thanks duck!)
2005-07-20 19:24:47 +00:00
843d5b625a 2005-07-19 jrandom
* Further preparation for removing I2CP crypto
    * Added some validation to the DH key agreement (thanks $anon)
    * Validate tunnel data message expirations (though not really a problem,
      since tunnels expire)
    * Minor PRNG threading cleanup
2005-07-19 21:00:25 +00:00
0f8ede85ca 2005-07-15 cervantes
* Added workaround for an odd win32 bug in the stats configuration
	  console page which meant only the first checkbox selection was saved.

2005-07-15  Romster
	* Added per group selection toggles in the stats configuration console
	  page.
2005-07-16 12:52:35 +00:00
9267d7cae2 more n3ws 2005-07-13 21:59:01 +00:00
dade5a981b 2005-07-13 jrandom
* Fixed a recently injected bug in the multitransport bidding which had
      allowed an essentially arbitrary choice of transports, rather than the
      properly ordered choice.
(getLatency() != getLatencyMs().  duh)
2005-07-13 20:07:31 +00:00
f873cba27e 2005-07-13 jrandom
* Fixed a long standing bug where we weren't properly comparing session
      tags but instead largely depending upon comparing their hashCode,
      causing intermittent decryption errors.
2005-07-13 18:20:43 +00:00
108dec53a5 * mixing a revert and some logging updates... (crosses fingers) 2005-07-12 22:30:13 +00:00
e9592ed400 2005-07-12 jrandom
* Add some data duplication to avoid a recently injected concurrency problem
      in the session tag manager (thanks redzara and romster).
2005-07-12 21:26:07 +00:00
4c230522a2 typo *ahem* 2005-07-12 03:56:42 +00:00
16bd19c6dc added bash.i2p, stats.i2p 2005-07-11 23:23:22 +00:00
b4b6d49d34 ssu testing 2005-07-11 23:16:41 +00:00
9d5f16a889 2005-07-11 jrandom
* Reduced the growth factor on the slow start and congestion avoidance for
      the streaming lib.
    * Adjusted some of the I2PTunnelServer threading to use a small pool of
      handlers, rather than launching off new threads which then immediately
      launch off an I2PTunnelRunner instance (which launches 3 more threads..)
    * Don't persist session keys / session tags (not worth it, for now)
    * Added some detection and handling code for duplicate session tags being
      delivered (root cause still not addressed)
    * Make the PRNG's buffer size configurable (via the config property
      "i2p.prng.totalBufferSizeKB=4096")
    * Disable SSU flooding by default (duh)
    * Updates to the StreamSink apps for better throttling tests.
2005-07-11 23:06:23 +00:00
51c492b842 no message 2005-07-09 23:02:19 +00:00
d3380228ac * you mean 3f != 0x3f? [duh]
* minor cleanups
2005-07-09 22:58:22 +00:00
ad47bf5da3 * moved the inbound partial messages to the PeerState itself, reducing lock contention in the InboundMessageFragments and transparently dropping failed messages when we drop old peer states 2005-07-07 22:27:44 +00:00
76e8631e31 included IV tagging info 2005-07-07 21:16:57 +00:00
f688b9112d 2005-07-05
* Use a buffered PRNG, pulling the PRNG data off a larger precalculated
      buffer, rather than the underlying PRNG's (likely small) one, which in
      turn reduces the frequency of recalcing.
    * More tuning to reduce temporary allocation churn
2005-07-05 22:08:56 +00:00
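
A toy version of the buffering idea above, sized by the i2p.prng.totalBufferSizeKB property mentioned in the 2005-07-11 entry (the real code lives in the router's PRNG classes):

    import java.security.SecureRandom;

    public class BufferedPrngSketch {
        private final SecureRandom source = new SecureRandom();
        private final byte[] buf;
        private int pos;

        BufferedPrngSketch(int sizeKB) {
            buf = new byte[sizeKB * 1024];
            pos = buf.length;                 // force a fill on first use
        }

        /** Hand out bytes from the big precalculated buffer, refilling only when empty. */
        synchronized void nextBytes(byte[] out) {
            for (int i = 0; i < out.length; i++) {
                if (pos >= buf.length) {      // buffer exhausted - one big recalculation
                    source.nextBytes(buf);
                    pos = 0;
                }
                out[i] = buf[pos++];
            }
        }

        public static void main(String[] args) {
            int kb = Integer.parseInt(System.getProperty("i2p.prng.totalBufferSizeKB", "4096"));
            BufferedPrngSketch prng = new BufferedPrngSketch(kb);
            byte[] iv = new byte[16];
            prng.nextBytes(iv);
            System.out.println("got " + iv.length + " random bytes from a " + kb + "KB buffer");
        }
    }
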
18d3f5d25d 2005-07-04 jrandom
* Within the tunnel, use xor(IV, msg[0:16]) as the flag to detect dups,
      rather than the IV by itself, preventing an attack that would let
      colluding internal adversaries tag a message to determine that they are
      in the same tunnel.  Thanks dvorak for the catch!
    * Drop long inactive profiles on startup and shutdown
    * /configstats.jsp: web interface to pick what stats to log
    * Deliver more session tags to account for wider window sizes
    * Cache some intermediate values in our HMACSHA256 and BC's HMAC
    * Track the client send rate (stream.sendBps and client.sendBpsRaw)
    * UrlLauncher: adjust the browser selection order
    * I2PAppContext: hooks for dummy HMACSHA256 and a weak PRNG
    * StreamSinkClient: add support for sending an unlimited amount of data
    * Migrate the tests out of the default build jars

2005-06-22  Comwiz
    * Migrate the core tests to junit
2005-07-04 20:44:17 +00:00
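
The xor(IV, msg[0:16]) duplicate flag from the 2005-07-04 entry above, sketched with a HashSet standing in for the router's decaying filter:

    import java.nio.ByteBuffer;
    import java.util.HashSet;
    import java.util.Set;

    public class TunnelDupFlagSketch {
        // the router uses a decaying filter; a HashSet is enough to show the flag construction
        private final Set<ByteBuffer> seen = new HashSet<ByteBuffer>();

        /** Dup flag = xor(IV, msg[0:16]) rather than the IV alone. */
        boolean isDuplicate(byte[] iv, byte[] msg) {
            byte[] flag = new byte[16];
            for (int i = 0; i < 16; i++)
                flag[i] = (byte) (iv[i] ^ msg[i]);
            return !seen.add(ByteBuffer.wrap(flag));
        }

        public static void main(String[] args) {
            TunnelDupFlagSketch filter = new TunnelDupFlagSketch();
            byte[] iv = new byte[16];
            byte[] msgA = new byte[32];
            byte[] msgB = new byte[32];
            msgB[0] = 1;                                       // different payload, same IV
            System.out.println(filter.isDuplicate(iv, msgA));  // false - first sighting
            System.out.println(filter.isDuplicate(iv, msgA));  // true  - genuine replay
            System.out.println(filter.isDuplicate(iv, msgB));  // false - reusing the IV alone no longer tags
        }
    }
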
440cf2c983 2005-03-23 Comwiz
* Phase 1 of the unit test bounty completed. (The router build script was modified not to build the router
 tests because of a broken dependency on the core tests. This should be fixed in
 phase 3 of the unit test bounty.)
2005-06-23 02:11:04 +00:00
adeb09576a util/PooledRandomSource.java 2005-06-03 20:23:32 +00:00
fd52bcf8cd added archive.i2p, www.fr.i2p, romster.i2p, marshmallow.i2p, openforums.i2p 2005-05-26 04:51:24 +00:00
c2696bba00 2005-05-25 duck
* Fixed PRNG bug (bugzilla #107)
2005-05-25 21:32:38 +00:00
fef9d57483 removed duplicate manveru.i2p 2005-05-10 05:53:18 +00:00
c250692ef0 added bittorrent.i2p - new home for brittanytracker 2005-05-04 05:52:55 +00:00
2a6024e196 end of first round of ssu testing 2005-05-03 00:53:53 +00:00
835662b3c9 2005-05-01 jrandom
* Added a substantial optimization to the AES engine by caching the
      prepared session keys (duh).
2005-05-02 02:35:16 +00:00
6b5b880ab6 * replaced explicit NACKs and numACKs with ACK bitfields for high congestion links
* increased the maximum number of fragments allowed in a message from 31 to 127,
  reducing the maximum fragment size to 8KB and moving around some bits in the fragment
  info.  This is not backwards compatible.
* removed the old (hokey) congestion control description, replacing it with the TCP-esque
  algorithm implemented
note: the code for the ACK bitfields and fragment info changes has not yet been
implemented, so the old version of this document describes what's going on in the live net.
the new bitfields / fragment info should be deployed in the next day or so (hopefully :)
2005-05-01 20:08:08 +00:00
3de23d4206 2005-05-01 jrandom
* Cleaned up the peers page a bit more.
more udp stuff:
* add new config option: i2np.udp.alwaysPreferred=true to adjust the bidding
  so that UDP is picked first, even if a TCP connection exists
* fixed the initial clock skew problem (duh)
* reduced the MTU to 576 (largest nearly-universally-safe, and allows a
  tunnel message in 2 fragments)
* handle some races @ connection establishment (thanks duck!)
* if there are more ACKs than we can send in a packet, reschedule another
  ACK immediately
2005-05-01 17:21:48 +00:00
ea82f2a8cc oops (thanks newkid!) 2005-05-01 01:35:23 +00:00
b5ad7642bc 2005-04-30 jrandom
* Added a small new page to the web console (/peers.jsp) which contains
      the peer connection information.  This will be cleaned up a lot more
      before 0.6 is out, but it's a start.
2005-05-01 00:48:15 +00:00
0fbe84e9f0 2005-04-30 jrandom
* Reduced some SimpleTimer churn
* add hooks for per-peer choking in the outbound message queue - if/when a
  peer reaches their cwin, no further messages will enter the 'active' pool
  until there are more bytes available.  other messages waiting (either later
  on in the same priority queue, or in the queues for other priorities) may
  take that slot.
* when we have a message acked, release the acked size to the congestion
  window (duh), rather than waiting for the second to expire and refill the
  capacity.
* send packets in a volley explicitly, waiting until we can allocate the full
  cwin size for that message
2005-04-30 23:26:18 +00:00
8063889d23 udp updates:
* more stats. including per-peer KBps (updated every second)
* improved blocking/timeout situations on the send queue
* added drop simulation hook
* provide logical RTO limits
2005-04-30 03:14:09 +00:00
6e1ac8e173 added elf.i2p, de-ebooks.i2p, i2pchan.i2p, longhorn.i2p 2005-04-29 22:26:12 +00:00
1b0bb5ea19 2005-04-29 jrandom
* Reduce the peer profile stat coalesce overhead by inlining it with the
      reorganize.
    * Limit each transport to at most one address (any transport that requires
      multiple entry points can include those alternatives in the address).
udp stuff:
* change the UDP transport's style from "udp" to "SSUv1"
* keep track of each peer's skew
* properly handle session reestablishment over an existing session, rather
  than requiring both sides to expire first
2005-04-29 06:24:12 +00:00
4ce51261f1 2005-04-28 jrandom
* More fixes for the I2PTunnel "other" interface handling (thanks nelgin!)
    * Add back the code to handle bids from multiple transports (though there
      is still only one transport enabled by default)
    * Adjust the router's queueing of outbound client messages when under
      heavy load by running the preparatory job in the client's I2CP handler
      thread, thereby blocking additional outbound messages when the router is
      hosed.
    * No need to validate or persist a netDb entry if we already have it
And for some udp stuff:
* only bid on what we know (duh)
* reduced the queue size in the UDPSender itself, so that ACKs go
  through more quickly, leaving the payload messages to queue up in
  the outbound fragment scheduler
* rather than cutting the window in half on congestion, cut it to 2/3 (still AIMD, but less drastic)
* adjust the fragment selector so a wsiz throttle won't force extra
  volleys
* mark congestion when it occurs, not after the message has been
  ACKed
* when doing a round robin over the active messages, move on to the
  next after a full volley, not after each packet (causing less "fair"
  performance but better latency)
* reduced the lock contention in the inboundMessageFragments by
  moving the ack and complete queues to the ACKSender and
  MessageReceiver respectively (each of which have their own
  threads)
* prefer new and existing UDP sessions to new TCP sessions, but
  prefer existing TCP sessions to new UDP sessions
2005-04-28 21:54:27 +00:00
6e34d9b73e added amobius.i2p 2005-04-28 02:11:02 +00:00
6e01637400 added google.i2p 2005-04-27 21:30:53 +00:00
9a96798f9f added mrplod.i2p 2005-04-27 03:58:00 +00:00
c9db6f87d1 2005-04-25 smeghead
* Added button to router console for manual update checks.
    * Fixed bug in configupdate.jsp that caused the proxy port to be updated
      every time the form was submitted even if it hadn't changed.
2005-04-26 02:59:23 +00:00
567ce84e1e * randomized the shitlist duration (still with exponential backoff though)
* fail UDP sessions after two consecutive failed messages in different minutes
* honor UDP reconnections
2005-04-25 16:29:48 +00:00
cde7ac7e52 2005-04-24 jrandom
* Added a pool of PRNGs using a different synchronization technique,
      hopefully sufficient to work around IBM's PRNG bugs until we get our
      own Fortuna.
    * In the streaming lib, don't jack up the RTT on NACK, and have the window
      size bound the not-yet-ready messages to the peer, not the unacked
      message count (not sure yet whether this is worthwhile).
    * Many additions to the messageHistory log.
    * Handle out of order tunnel fragment delivery (not an issue on the live
      net with TCP, but critical with UDP).
2005-04-24 18:44:59 +00:00
b2f0d17e94 2005-04-24 jrandom
* Added a pool of PRNGs using a different synchronization technique,
      hopefully sufficient to work around IBM's PRNG bugs until we get our
      own Fortuna.
    * In the streaming lib, don't jack up the RTT on NACK, and have the window
      size bound the not-yet-ready messages to the peer, not the unacked
      message count (not sure yet whether this is worthwhile).
    * Many additions to the messageHistory log.
    * Handle out of order tunnel fragment delivery (not an issue on the live
      net with TCP, but critical with UDP).
and for udp stuff:
* implemented tcp-esque rto code in the udp transport
* make sure we don't ACK too many messages at once
* transmit fragments in a simple (nonrandom) order so that we can more easily
  adjust timeouts/etc.
* let the active outbound pool grow dynamically if there are outbound slots to
  spare
* use a simple decaying bloom filter at the UDP level to drop duplicate resent
  packets.
2005-04-24 18:42:02 +00:00
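
A compact decaying Bloom filter in the spirit of the last UDP item above; the two rotating bit sets, their size, and the hash count are arbitrary sketch choices:

    import java.util.BitSet;

    public class DecayingBloomSketch {
        private static final int BITS = 1 << 20;   // ~1M bits per generation (arbitrary)
        private BitSet current = new BitSet(BITS);
        private BitSet previous = new BitSet(BITS);

        /** Returns true if the packet was (probably) seen in this or the previous generation. */
        synchronized boolean addAndCheckDup(byte[] packet) {
            int h1 = hash(packet, 17);
            int h2 = hash(packet, 31);
            boolean dup = (current.get(h1) && current.get(h2))
                       || (previous.get(h1) && previous.get(h2));
            current.set(h1);
            current.set(h2);
            return dup;
        }

        /** Called periodically (e.g. every few seconds) so old entries decay away. */
        synchronized void decay() {
            previous = current;
            current = new BitSet(BITS);
        }

        private static int hash(byte[] data, int seed) {
            int h = seed;
            for (int i = 0; i < data.length; i++)
                h = h * 31 + data[i];
            return (h & 0x7fffffff) % BITS;
        }

        public static void main(String[] args) {
            DecayingBloomSketch filter = new DecayingBloomSketch();
            byte[] pkt = "a resent fragment".getBytes();
            System.out.println(filter.addAndCheckDup(pkt));  // false
            System.out.println(filter.addAndCheckDup(pkt));  // true - dropped as a duplicate resend
            filter.decay();
            filter.decay();                                  // after two decays the entry is forgotten
            System.out.println(filter.addAndCheckDup(pkt));  // false again
        }
    }
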
dae6be14b7 I removed those dumb platform specific makefiles. They weren't doing what they ought anyway. If there are platform specific issues, someone please tell me and I'll provide support for it here. Or patch it yourself.
And this is the big "Fix the Parser" patch.  It turns the sam_parse function in src/parse.c into something that actually works.  Generating the argument list from an incoming SAM thingy is a bit memory churn-y; perhaps when I have time I'll replace all those strdups with structures that simply track the (start,end) indices.
Oh and also I moved i2p-ping to the new system.  Which required 0 change in code.  All I did was fix the Makefile, and add shared library libtool support.  Anyway, so enjoy folks.  It's rare I'm this productive
- polecat
2005-04-23 03:28:40 +00:00
20cec857d2 signed with the latest 2005-04-21 16:26:46 +00:00
aum
739f694cfe Node shutdown now uses halt() 2005-04-21 03:10:16 +00:00
aum
84779002fb now builds a working Q console 2005-04-20 21:35:05 +00:00
df926fb60d * 2005-04-20 0.5.0.7 released 2005-04-20 20:14:17 +00:00
a2c7c5a516 2005-04-20 jrandom
* In the SDK, we don't actually need to block when we're sending a message
      as BestEffort (and these days, we're always sending BestEffort).
    * Pass out client messages in fewer (larger) steps.
    * Have the InNetMessagePool short circuit dispatch requests.
    * Have the message validator take into account expiration to cut down on
      false positives at high transfer rates.
    * Allow configuration of the probabilistic window size growth rate in the
      streaming lib's slow start and congestion avoidance phases, and default
      them to a more conservative value (2), rather than the previous value
      (1).
    * Reduce the ack delay in the streaming lib to 500ms
    * Honor choke requests in the streaming lib (only affects those getting
      insanely high transfer rates)
    * Let the user specify an interface besides 127.0.0.1 or 0.0.0.0 on the
      I2PTunnel client page (thanks maestro^!)
(plus minor udp tweaks)
2005-04-20 19:15:25 +00:00
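
A sketch of the probabilistic window growth mentioned above, under the assumption that a growth factor of N means roughly one packet of growth per N acks in slow start and per N windows of acks in congestion avoidance:

    import java.util.Random;

    public class WindowGrowthSketch {
        public static void main(String[] args) {
            int factor = 2;                       // the more conservative default mentioned above
            int window = 1;
            int ssthresh = 16;
            Random rng = new Random();

            for (int acked = 0; acked < 2000; acked++) {
                if (window < ssthresh) {
                    // slow start: grow by one packet per 'factor' acks (probabilistically)
                    if (rng.nextInt(factor) == 0)
                        window++;
                } else {
                    // congestion avoidance: roughly +1 packet per 'factor' windows of acks
                    if (rng.nextInt(factor * window) == 0)
                        window++;
                }
            }
            System.out.println("window after 2000 acks with factor " + factor + ": " + window);
        }
    }
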
aum
1861379d43 needed for QConsole 2005-04-20 18:54:39 +00:00
aum
408a344aae added QConsole 2005-04-20 18:53:08 +00:00
e9c1ed70d0 added sirup.i2p 2005-04-18 23:27:31 +00:00
916dcca2b0 * build with reference to the i2p.jar/mstreaming.jar/i2ptunnel.jar inline (building as necessary)
* removed unnecessary references to i2ptunnel (though i2ptunnelxmlobject still references i2ptunnelxmlwrapper)
2005-04-18 18:47:22 +00:00
aum
31e81bab17 fixed build failures 2005-04-18 18:12:32 +00:00
aum
6a5170c341 oops, forgot to add earlier 2005-04-18 18:05:50 +00:00
aum
42bff8093c removed obsolete ref to MiniHttpRequestHandlerBase, changed to MiniHttpRequestHandler 2005-04-18 18:04:12 +00:00
aum
d1df94f284 added needed html template files 2005-04-18 18:02:07 +00:00
aum
9cf1744291 restored images in binary mode 2005-04-18 17:24:33 +00:00
aum
f0545c8c9a removed images which were not checked in as binary 2005-04-18 17:22:27 +00:00
aum
ef9ed87d30 binary mode this time 2005-04-18 17:20:10 +00:00
aum
58ffd92a34 dammit, forgot binary mode 2005-04-18 17:19:25 +00:00
aum
418facc7e0 Added apps/q - the Q distributed file store framework, by aum 2005-04-18 17:03:21 +00:00
7f3c953e14 2005-04-17 sirup
* Added the possibility for i2ptunnel client and httpclient instances to
      have their own i2p session (and hence, destination and tunnels).  By
      default, tunnels are shared, but that can be changed on the web
      interface or with the sharedClient config option in i2ptunnel.config.
2005-04-17  jrandom
    * Marked the net.i2p.i2ptunnel.TunnelManager as deprecated.  Anyone use
      this?  If not, I want to drop it (lots of tiny details with lots of
      duplicated semantics).
2005-04-18 02:07:57 +00:00
addab1fa2a 2005-04-17 zzz
* Added new user-editable eepproxy error page templates.
2005-04-17  jrandom
    * Revamp the tunnel building throttles, fixing a situation where the
      rebuild may not recover, and defaulting it to unthrottled (users with
      slow CPUs may want to set "router.tunnel.shouldThrottle=true" in their
      advanced router config)
2005-04-17 23:23:20 +00:00
39343ce957 2005-04-16 jrandom
* Migrated to Bouncycastle's SHA256 and HMAC implementations for efficiency
2005-04-17 01:04:06 +00:00
7389cec78f 2005-04-16 jrandom
* Migrated to Bouncycastle's SHA256 and HMAC implementations for efficiency
(also lots of udp fixes)
2005-04-17 00:59:48 +00:00
9e5fe7d2b6 * fixed some stupid threading issues in the packet handler (duh)
* use the new raw i2np message format (the previous corruptions were due to above)
* add a new test component (UDPFlooder) which floods all peers at the rate desired
* packet munging fix for highly fragmented messages
* include basic slow start code
* fixed the UDP peer rate refilling
* cleaned up some nextSend scheduling
2005-04-16 15:18:09 +00:00
a7dfaee5ac added connelly.i2p 2005-04-13 02:29:59 +00:00
7beb92b1cc First pass of the UDP transport. Nowhere near ready for use, but it does
the basics (negotiate a session and send I2NP messages back and forth).  Lots,
lots more left.
2005-04-12 16:48:43 +00:00
5b56d22da9 2005-04-12 jrandom
* Make sure we don't get cached updates (thanks smeghead!)
    * Clear out the callback for the TestJob after it passes (only affects the
      job timing accounting)
2005-04-12 15:22:11 +00:00
e6b343070a removed copy/paste error 2005-04-09 23:15:53 +00:00
8496b88518 2005-04-08 smeghead
* Added NativeBigInteger benchmark to scripts/i2pbench.sh.
2005-04-09 03:16:05 +00:00
aa542b7876 for implementation simplicity, include fragment size in the SessionConfirmed packets 2005-04-08 23:20:45 +00:00
3f7d46378b * specify exactly what gets in the DSA signatures for the connection establishment
* include a new signedOnTime so that we can prepare the packet at a different moment from
  when we encrypt & send it (also allowing us to reuse that signature on resends for the same
  establishment)
2005-04-08 14:21:26 +00:00
b36def1f72 2005-04-08 smeghead
* Security improvements to TrustedUpdate: signing and verification of the
      version string along with the data payload for signed update files
      (consequently the positions of the DSA signature and version string fields
      have been swapped in the spec for the update file's header); router will
      no longer perform a trusted update if the signed update's version is lower
      than or equal to the currently running router's version.
    * Added two new CLI commands to TrustedUpdate: showversion, verifyupdate.
    * Extended TrustedUpdate public API for use by third party applications.
2005-04-08 12:39:20 +00:00
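
The "never downgrade" rule in the first item above reduces to a dotted-version comparison; a standalone take (not TrustedUpdate's own code):

    public class VersionCompareSketch {
        /** Negative, zero, or positive as a is older than, equal to, or newer than b. */
        static int compare(String a, String b) {
            String[] av = a.split("\\.");
            String[] bv = b.split("\\.");
            int len = Math.max(av.length, bv.length);
            for (int i = 0; i < len; i++) {
                int ai = i < av.length ? Integer.parseInt(av[i]) : 0;
                int bi = i < bv.length ? Integer.parseInt(bv[i]) : 0;
                if (ai != bi)
                    return ai - bi;
            }
            return 0;
        }

        public static void main(String[] args) {
            String running = "0.5.0.7";
            String update = "0.5.0.6";
            if (compare(update, running) <= 0)
                System.out.println("refusing update " + update + ": not newer than " + running);
            else
                System.out.println("installing " + update);
        }
    }
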
5a6a3a5e8d oops, forgot to add new eepget script to build 2005-04-08 01:56:02 +00:00
c3bd26d9b4 added wspucktracker.i2p 2005-04-07 20:40:30 +00:00
aum
967e106ee7 fixed one last javadoc err 2005-04-07 04:36:06 +00:00
aum
7c73e59482 Fixed more javadoc errors 2005-04-07 04:26:55 +00:00
aum
03dfa913d1 Removed erroneous @author tag from methods 2005-04-07 04:05:13 +00:00
348e845793 *cough* thanks cervantes 2005-04-06 16:38:38 +00:00
80827c3aad * 2005-04-06 0.5.0.6 released 2005-04-06 15:43:25 +00:00
3b4cf0a024 added 55cancri.i2p 2005-04-06 15:14:00 +00:00
941252fd80 2005-04-05 jrandom
* Retry I2PTunnel startup if we are unable to build a socketManager for a
      client or httpclient tunnel.
    * Add some basic sanity checking on the I2CP settings (thanks duck!)
2005-04-05 22:24:32 +00:00
bc626ece2d 2005-04-05 jrandom
* After a successful netDb search for a leaseSet, republish it to all of
      the peers we have tried so far who did not give us the key (up to 10),
      rather than the old K closest (which may include peers who had given us
      the key)
    * Don't wait 5 minutes to publish a leaseSet (duh!), and rather than
      republish it every 5 minutes, republish it every 3.  In addition, always
      republish as soon as the leaseSet changes (duh^2).
    * Minor fix for oddball startup race (thanks travis_bickle!)
    * Minor AES update to allow in-place decryption.
2005-04-05 16:06:14 +00:00
400feb3ba7 clarify crypto/hmac usage for simpler implementation 2005-04-05 15:28:54 +00:00
756a4e3995 added a section for congestion control describing what I hope to implement. what
/actually/ gets implemented will be documented further once it's, er, implemented
2005-04-04 17:21:30 +00:00
aum
578301240e Added constructors to PrivateKey, PublicKey, SigningPrivateKey and
SigningPublicKey, which take a single String argument and construct
the object from the Base64 data in that string (where this data is
the product of a .toBase64() call on a prior instance).
2005-04-04 06:13:50 +00:00
aum
9b8f91c7f9 Added 'toPublic()' methods to PrivateKey and SigningPrivateKey, such
that these return PublicKey and SigningPublicKey objects, respectively.
2005-04-04 06:01:13 +00:00
c7c389d4fb added eepget wrapper script for *nix 2005-04-03 13:35:52 +00:00
68f7adfa0b *** keyword substitution change *** 2005-04-03 13:33:29 +00:00
c4ac5170c7 2005-04-03 jrandom
* EepGet fix for open-ended HTTP fetches (such as the news.xml
      feeding the NewsFetcher)
2005-04-03 12:50:11 +00:00
32e0c8ac71 updated status blurb 2005-04-03 07:22:28 +00:00
c9c1eae32f 2005-04-01 jrandom
* Allow editing I2PTunnel server instances with five digit ports
      (thanks nickless_head!)
    * More NewsFetcher debugging for reported weirdness
2005-04-01 13:29:26 +00:00
33366cc291 2005-04-01 jrandom
* Fix to check for missing news file (thanks smeghead!)
    * Added destination display CLI:
      java -cp lib/i2p.jar net.i2p.data.Destination privKeyFilename
    * Added destination display to the web interface (thanks pnspns)
    * Installed CIA backdoor
2005-04-01 11:28:06 +00:00
083ac1f125 n3wz0rz 2005-03-31 02:04:18 +00:00
80c6290b89 oh five oh five 2005-03-30 03:27:55 +00:00
6492ad165a Added tracker.fr.i2p 2005-03-30 03:21:18 +00:00
f0d1b1a40e added v2mail.i2p, complication.i2p 2005-03-30 01:49:49 +00:00
17f044e6cd if using numACKs, use a 2 byte value (to handle higher transfer rates) 2005-03-30 00:20:07 +00:00
63f3a9cd7b * 2005-03-29 0.5.0.5 released
2005-03-29  jrandom
    * Decreased the initial RTT estimate to 10s to allow more retries.
    * Increased the default netDb store replication factor from 2 to 6 to take
      into consideration tunnel failures.
    * Address some statistical anonymity attacks against the netDb that could
      be mounted by an active internal adversary by only answering lookups for
      leaseSets we received through an unsolicited store.
    * Don't throttle lookup responses (we throttle enough elsewhere)
    * Fix the NewsFetcher so that it doesn't incorrectly resume midway through
      the file (thanks nickster!)
    * Updated the I2PTunnel HTML (thanks postman!)
    * Added support to the I2PTunnel pages for the URL parameter "passphrase",
      which, if matched against the router.config "i2ptunnel.passphrase" value,
      skips the nonce check.  If the config prop doesn't exist or is blank, no
      passphrase is accepted.
    * Implemented HMAC-SHA256.
    * Enable the tunnel batching with a 500ms delay by default
    * Dropped compatibility with 0.5.0.3 and earlier releases
2005-03-30 00:07:36 +00:00
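
For orientation, the standard JCE spelling of HMAC-SHA256 (I2P bundled its own implementation; this is only the textbook equivalent, not that code):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class HmacSha256Sketch {
        public static void main(String[] args) throws Exception {
            byte[] key = new byte[32];                        // demo key - all zeros
            byte[] message = "tunnel test payload".getBytes("UTF-8");

            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] tag = mac.doFinal(message);

            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < tag.length; i++)
                hex.append(String.format("%02x", tag[i]));
            System.out.println("HMAC-SHA256 = " + hex);
        }
    }
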
b8ddbf13b4 added lazyguy.i2p 2005-03-28 02:41:19 +00:00
be9bdbfe0f * simplify the MAC construct with a single HMAC (the other setup was an oracle anyway)
* split out the encryption and MAC keys
2005-03-27 22:08:16 +00:00
bc74bf1402 added confessions.i2p, rsync.thetower.i2p, redzara.i2p, gaytorrents.i2p 2005-03-27 01:03:42 +00:00
5c2a57f95a minor cleanup 2005-03-26 09:22:17 +00:00
9cd8cc692e added replay prevention blurb, minor cleanup 2005-03-26 09:19:42 +00:00
ebac4df2d3 2005-03-26 jrandom
* Added some error handling and some fairly safe data caching to the streaming
      lib (good call Tom!)
2005-03-26 07:13:38 +00:00
0626f714c6 speling (thanks cervantes) 2005-03-26 06:23:57 +00:00
21842291e9 *cough* 2005-03-26 05:56:06 +00:00
d461c295f6 first draft of secure semireliable UDP protocol 2005-03-26 05:47:40 +00:00
85b3450525 2005-03-25 jrandom
* Fixed up building dependencies for the routerconsole on some more
      aggressive compilers (thanks polecat!)
2005-03-25 04:07:05 +00:00
aum
75d7c81b7c Oops, forgot the DataFormatException 2005-03-24 08:39:04 +00:00
aum
1433e20f73 Added Destination constructor which accepts/uses a base64 string arg 2005-03-24 08:37:17 +00:00
e614a2f726 * 2005-03-24 0.5.0.4 released 2005-03-24 07:29:27 +00:00
32be7f1fd8 grr 2005-03-24 04:58:28 +00:00
66e1d95a2a *cough* oops 2005-03-24 04:49:15 +00:00
ff03be217e 2005-03-23 jrandom
* Added more intelligent version checking in news.xml, in case we have a
      version newer than the one specified.
2005-03-24 03:18:15 +00:00
a52f8b89dc 2005-03-23 jrandom
* Added support for Transfer-Encoding: chunked to the EepGet, so that the
      cvsweb.cgi doesn't puke on us.
2005-03-24 02:38:10 +00:00
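
A bare-bones decoder for a Transfer-Encoding: chunked body, to show what EepGet now has to cope with (assumes no trailers and tolerant CR handling; not EepGet's actual parser):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ChunkedDecodeSketch {
        /** Decode a Transfer-Encoding: chunked body into the raw payload. */
        static byte[] decode(InputStream in) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            while (true) {
                String sizeLine = readLine(in);
                int semi = sizeLine.indexOf(';');              // strip any chunk extension
                if (semi >= 0) sizeLine = sizeLine.substring(0, semi);
                int size = Integer.parseInt(sizeLine.trim(), 16);
                if (size == 0) break;                          // last chunk
                for (int i = 0; i < size; i++) {
                    int b = in.read();
                    if (b < 0) throw new IOException("truncated chunk");
                    out.write(b);
                }
                readLine(in);                                  // consume the CRLF after the chunk
            }
            return out.toByteArray();
        }

        private static String readLine(InputStream in) throws IOException {
            StringBuilder line = new StringBuilder();
            int b;
            while ((b = in.read()) >= 0 && b != '\n')
                if (b != '\r') line.append((char) b);
            return line.toString();
        }

        public static void main(String[] args) throws IOException {
            String body = "5\r\nhello\r\n7\r\n, world\r\n0\r\n\r\n";
            byte[] payload = decode(new ByteArrayInputStream(body.getBytes("ISO-8859-1")));
            System.out.println(new String(payload, "ISO-8859-1"));   // hello, world
        }
    }
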
21c7c043b3 Fixed Bugzilla Bug #99 2005-03-24 01:54:23 +00:00
45e6608ad3 Added 'Unit test passed' log message and made test check that Bug #99 is fixed. 2005-03-24 01:50:19 +00:00
28978e3680 Fixed Bug #99: Data pending to be sent is still sent even if STREAM CLOSE is issued. 2005-03-24 01:49:00 +00:00
904f755c8c 2005-03-23 jrandom
* Implemented the news fetch / update policy code, as configured on
      /configupdate.jsp.  Defaults are to grab the news every 24h (or if it
      doesn't exist yet, on startup).  No action is taken however, though if
      the news.xml specifies that a new release is available, an option to
      update will be shown on the router console.
    * New initialNews.xml delivered with new installs, and moved news.xml out
      of the i2pwww module and into the i2p module so that we can bundle it
      within each update.
2005-03-24 01:19:52 +00:00
a2c309ddd3 2005-03-23 jrandom
* New /configupdate.jsp page for controlling the update / notification
      process, as well as various minor related updates.  Note that not all
      options are exposed yet, and the update detection code isn't in place
      in this commit - it currently says there is always an update available.
    * New EepGet component for reliable downloading, with a CLI exposed in
      java -cp lib/i2p.jar net.i2p.util.EepGet url
    * Added a default signing key to the TrustedUpdate component to be used
      for verifying updates.  This signing key can be authenticated via
      gpg --verify i2p/core/java/src/net/i2p/crypto/TrustedUpdate.java
    * New public domain SHA1 implementation for the DSA code so that we can
      handle signing streams of arbitrary size without excess memory usage
      (thanks P.Verdy!)
    * Added some helpers to the TrustedUpdate to work off streams and to offer
      a minimal CLI:
          TrustedUpdate keygen pubKeyFile privKeyFile
          TrustedUpdate sign origFile signedFile privKeyFile
          TrustedUpdate verify signedFile
2005-03-23 21:13:03 +00:00
aum
677eeac8f7 changed existing 'decodeToString' to public 2005-03-23 06:30:31 +00:00
aum
b232cc0f24 D'oh, .decodeToString was already there, eliminated my vers 2005-03-23 06:26:23 +00:00
aum
18bbae1d1e changed 'String decode(String raw)' to 'String decodeToString(String raw)'
to eliminate name clash.
2005-03-23 06:24:25 +00:00
aum
08ee62b52c Added convenience methods:
- String encode(String raw)
 - String decode(String raw)
2005-03-23 06:21:16 +00:00
5b83aed719 * Added basic trusted update creation/verification 2005-03-22 17:08:01 +00:00
b5875ca07b 2005-03-21 jrandom
* Fixed the tunnel fragmentation handler to deal with multiple fragments
      in a single message properly (rather than release the buffer into the
      cache after processing the first one) (duh!)
    * Added the batching preprocessor which will bundle together multiple
      small messages inside a single tunnel message by delaying their delivery
      up to .5s, or whenever the pending data will fill a full message,
      whichever comes first.  This is disabled at the moment, since without the
      above bugfix widely deployed, lots and lots of messages would fail.
    * Within each tunnel pool, stick with a randomly selected peer for up to
      .5s before randomizing and selecting again, instead of randomizing the
      pool each time a tunnel is needed.
2005-03-22 02:00:10 +00:00
3f9bf28382 2005-03-21 jrandom
* Fixed the tunnel fragmentation handler to deal with multiple fragments
      in a single message properly (rather than release the buffer into the
      cache after processing the first one) (duh!)
    * Added the batching preprocessor which will bundle together multiple
      small messages inside a single tunnel message by delaying their delivery
      up to .5s, or whenever the pending data will fill a full message,
      whichever comes first.  This is disabled at the moment, since without the
      above bugfix widely deployed, lots and lots of messages would fail.
    * Within each tunnel pool, stick with a randomly selected peer for up to
      .5s before randomizing and selecting again, instead of randomizing the
      pool each time a tunnel is needed.
2005-03-22 01:38:21 +00:00
a2bd71c75b * 2005-03-18 0.5.0.3 released
2005-03-18  jrandom
    * Minor tweak to the timestamper to help reduce small skews
    * Adjust the stats published to include only the relevant ones
    * Only show the currently used speed calculation on the profile page
    * Allow the full max # resends to be sent, rather than piggybacking the
      RESET packet along side the final resend (duh)
    * Add irc.postman.i2p to the default list of IRC servers for new installs
    * Drop support for routers running 0.5 or 0.5.0.1 while maintaining
      backwards compatibility for users running 0.5.0.2.
2005-03-18 22:34:51 +00:00
89509490c5 2005-03-18 jrandom
* Eepproxy Fix for corrupted HTTP headers (thanks nickster!)
    * Fixed case sensitivity issues on the HTTP headers (thanks duck!)
2005-03-18 08:48:00 +00:00
a997a46040 2005-03-17 jrandom
* Update the old speed calculator and associated profile data points to
      use a non-tiered moving average of the tunnel test time, avoiding the
      freshness issues of the old tiered speed stats.
    * Explicitly synchronize all of the methods on the PRNG, rather than just
      the feeder methods (sun and kaffe only need the feeder, but it seems ibm
      needs all of them synchronized).
    * Properly use the tunnel tests as part of the profile stats.
    * Don't flood the jobqueue with sequential persist profile tasks, but
      instead, inject a brief scheduling delay between them.
    * Reduce the TCP connection establishment timeout to 20s (which is still
      absurdly excessive)
    * Reduced the max resend delay to 30s so we can get some resends in when
      dealing with client apps that hang up early (e.g. wget)
    * Added more alternative socketManager factories (good call aum!)
2005-03-17 22:12:51 +00:00
538dd07e7b 2005-03-16 jrandom
* Adjust the old speed calculator to include end to end RTT data in its
      estimates, and use that as the primary speed calculator again.
    * Use the mean of the high capacity speeds to determine the fast
      threshold, rather than the median.  Perhaps we should use the mean of
      all active non-failing peers?
    * Updated the profile page to sort by tier, then alphabetically.
    * Added some alternative socketManager factories (good call aum!)
2005-03-17 05:29:55 +00:00
046778404e added arkan.i2p, search.i2p, floureszination.i2p, antipiratbyran.i2p
asylum.i2p, templar.i2p
2005-03-16 02:56:01 +00:00
766f83d653 added feedspace.i2p 2005-03-16 02:46:17 +00:00
b20aee6753 2005-03-14 jrandom
* New strict speed calculator that goes off the actual number of messages
      verifiably sent through the peer by way of tunnels.  Initially, this only
      contains the successful message count on inbound tunnels, but may be
      augmented later to include verified outbound messages, peers queried in
      the netDb, etc.  The speed calculation decays quickly, but should give
      a better differential than the previous stat (both values are shown on
      the /profiles.jsp page)
2005-03-15 03:47:14 +00:00
f9aa3aef18 added wiki.fr.i2p 2005-03-14 04:31:55 +00:00
d74aa6e53d (no, this doesn't fix things yet, but it's a save point along the path)
2005-03-11  jrandom
    * Rather than the fixed resend timeout floor (10s), use 10s+RTT as the
      minimum (increased on resends as before, of course).
    * Always prod the clock update listeners, even if just to tell them that
      the time hasn't changed much.
    * Added support for explicit peer selection for individual tunnel pools,
      which will be useful in debugging but not recommended for use by normal
      end users.
    * More aggressively search for the next hop's routerInfo on tunnel join.
    * Give messages received via inbound tunnels that are bound to remote
      locations sufficient time (taking into account clock skew).
    * Give alternate direct send messages sufficient time (10s min, not 5s)
    * Always give the end to end data message the explicit timeout (though the
      old default was sufficient before)
    * No need to give end to end messages an insane expiration (+2m), as we
      are already handling skew on the receiving side.
    * Don't complain too loudly about expired TunnelCreateMessages (at least,
      not until after all those 0.5 and 0.5.0.1 users upgrade ;)
    * Properly keep the sendBps stat
    * When running the router with router.keepHistory=true, log more data to
      messageHistory.txt
    * Logging updates
    * Minor formatting updates
2005-03-11 22:23:36 +00:00
ea6fbc7835 added septu.i2p 2005-03-09 20:02:14 +00:00
536e604b8e 2005-03-07 jrandom
* Fix the HTTP response header filter to allow multiple headers with the
      same name (thanks duck and spotteri!)
2005-03-08 02:45:14 +00:00
49d6f5018f * Properly expand the HTTP response header buffer (thanks shendaras!) 2005-03-07 00:40:45 +00:00
4a830e422a added music.i2p, rotten.i2p, wintermute.i2p, kaji2.i2p, aspnet.i2p, gaming.i2p, nntp.i2p 2005-03-07 00:38:19 +00:00
df6c52fe75 * 2005-03-06 0.5.0.2 released
2005-03-06  jrandom
    * Allow the I2PTunnel web interface to select streaming lib options for
      individual client tunnels, rather than sharing them across all of them,
      as we do with the session options.  This way people can (and should) set
      the irc proxy to interactive and the eepproxy to bulk.
    * Added a startRouter.sh script to new installs which simply calls
      "sh i2prouter start".  This should make it clear how people should start
      I2P.
2005-03-07 00:07:27 +00:00
01979c08b3 2005-03-04 jrandom
* Filter HTTP response headers in the eepproxy, forcing Connection: close
      so that broken (/malicious) webservers can't allow persistent
      connections.  All HTTP compliant browsers should now always close the
      socket.
    * Enabled the GZIPInputStream's cache (they weren't cached before)
    * Make sure our first send is always a SYN (duh)
    * Workaround for some buggy compilers
2005-03-05 02:54:42 +00:00
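A minimal sketch of the response-header rewrite described in the 2005-03-04 entry above (plain Java, not the actual I2PTunnel code): any connection-related headers the server sent are stripped and "Connection: close" is always appended.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch only: force non-persistent connections on proxied responses.
    public class HeaderFilterSketch {
        static List<String> filter(List<String> headerLines) {
            List<String> out = new ArrayList<String>();
            for (String line : headerLines) {
                String lower = line.toLowerCase();
                if (lower.startsWith("connection:") || lower.startsWith("proxy-connection:"))
                    continue; // drop whatever the (possibly malicious) server sent
                out.add(line);
            }
            out.add("Connection: close"); // always close the socket after the response
            return out;
        }
        public static void main(String[] args) {
            List<String> hdrs = new ArrayList<String>();
            hdrs.add("Content-Type: text/html");
            hdrs.add("Connection: keep-alive");
            System.out.println(filter(hdrs));
        }
    }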
7928ef83cc added cowsay.i2p 2005-03-04 23:37:39 +00:00
10afe0a060 2005-03-03 jrandom
* Loop while starting up the I2PTunnel instances, in case the I2CP
      listener isn't up yet (thanks detonate!)
    * Implement custom reusable GZIP streams to both reduce memory churn
      and prevent the exposure of data in the standard GZIP header (creation
      time, OS, etc).  This is RFC1952 compliant, and backwards compatible,
      though it has only been tested within the confines of I2P's compression use
      (DataHelper.[de]compress).
    * Preemptively support the next protocol version, so that after the 0.5.0.2
      release, we'll be able to drop protocol=2 to get rid of 0.5 users.
2005-03-04 06:09:20 +00:00
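For reference, an RFC 1952 header that leaks nothing looks like the sketch below (illustration only; the exact bytes the reusable GZIP streams write are not shown in the entry above): the MTIME field is zeroed and the OS byte is set to "unknown".

    // Sketch only: a fixed, anonymous gzip (RFC 1952) header.
    public class GzipHeaderSketch {
        static byte[] anonymousHeader() {
            return new byte[] {
                (byte) 0x1f, (byte) 0x8b, // gzip magic
                8,                        // CM = deflate
                0,                        // FLG: no extra fields, no name, no comment
                0, 0, 0, 0,               // MTIME = 0 (no creation time leaked)
                0,                        // XFL
                (byte) 0xff               // OS = unknown
            };
        }
        public static void main(String[] args) {
            for (byte b : anonymousHeader())
                System.out.printf("%02x ", b & 0xff);
            System.out.println();
        }
    }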
ef230cfa3d 2005-03-02 jrandom
* Fix one substantial OOM cause (session tag manager was only dropping
      tags once the critical limit was met, rather than honoring their
      expiration) (duh)
    * Lots of small memory fixes
    * Double the allowable concurrent outstanding tunnel build tasks (20)
2005-03-03 03:36:52 +00:00
2d15a42137 big code cleanup to reduce number of compiler warnings 2005-03-01 23:25:15 +00:00
57d6a2f645 2005-03-01 jrandom
* Really disable the streaming lib packet caching
    * Synchronized a message handling point in the SDK (even though its use is
      already essentially single threaded, it's better to play it safe)
    * Don't add new RepublishLeaseSetJobs on failure, just requeue up the
      existing one (duh)
    * Throttle the number of concurrent pending tunnel builds across all
      pools, in addition to simply throttling the number of new requests per
      minute for each pool individually.  This should avoid the cascading
      failure when tunnel builds take too long, as no new builds will be
      created until the previous ones are handled.
    * Factored out and extended the DataHelper's unit tests for dealing with
      long and date formatting.
    * Explicitly specify the HTTP auth realm as "i2prouter", though this
      alone doesn't address the bug where jetty asks for authentication too
      much.  (thanks orion!)
    * Updated the StreamSinkServer to ignore all read bytes, rather than write
      them to the filesystem.
2005-03-01 17:50:52 +00:00
469a0852d7 2005-02-27 jrandom
* Don't rerequest leaseSets if there are already pending requests
    * Reverted the insufficiently tested caching in the DSA/SHA1 impl, and
      temporary disabled the streaming lib packet caching.
    * Reduced the resend RTT penalty to 10s
2005-02-27 22:09:37 +00:00
7983bb1490 1.3 here too 2005-02-27 00:13:00 +00:00
2e7eac02ed 2005-02-26 jrandom
* Force 1.3-isms on the precompiled jsps too (thanks laberhost)
2005-02-27 00:03:42 +00:00
238389fc7f 2005-02-26 jrandom
* Further streaming lib caching improvements
    * Reduce the minimum RTT (used to calculate retry timeouts), but also
      increase the RTT on resends.
    * Lower the default message size to 4KB from 16KB to further reduce the
      chance of failed fragmentation.
    * Extend tunnel rebuild throttling to include fallback rebuilds
    * If there are fewer than 20 routers known, don't drop the last 20 (to help
      avoid dropping all peers under catastrophic failures)
    * New stats for end to end messages - "client.leaseSetFoundLocally",
      "client.leaseSetFoundRemoteTime", and "client.leaseSetFailedRemoteTime"
2005-02-26 19:16:46 +00:00
4cec9da0a6 2005-02-24 jrandom
* Throttle the number of tunnel rebuilds per minute, preventing CPU
      overload under catastrophic failures (thanks Tracker and cervantes!)
    * Block the router startup process until we've initialized the clock
2005-02-24 23:53:35 +00:00
00f27d4400 2005-02-24 jrandom
* Cache temporary memory allocation in the DSA's SHA1 impl, and the packet
      data in the streaming lib.
    * Fixed a streaming lib bug where the connection initiator would fail the
      stream if the ACK to their SYN was lost.
2005-02-24 18:05:25 +00:00
f61618e4a4 2005-02-23 jrandom
* Now that we don't get stale SAM sessions, it'd be nice if we didn't
      get stale tunnel pools, don't you think?
2005-02-23 21:44:30 +00:00
265d5e306e * 2005-02-23 0.5.0.1 released 2005-02-23 05:00:52 +00:00
10ed058c2e 2005-02-22 jrandom
* Reworked the tunnel (re)building process to remove the tokens and
      provide cleaner controls on the tunnels built.
    * Fixed situations where the timestamper wanted to test more servers than
      were provided (thanks Tracker!)
    * Get rid of the dead SAM sessions by using the streaming lib's callbacks
      (thanks Tracker!)
2005-02-23 04:20:28 +00:00
8a21f0efec 2005-02-22 jrandom
* Temporary workaround for the I2CP disconnect bug (have the streaming lib
      try to automatically reconnect on accept()/connect(..)).
    * Loop check for expired lease republishing (just in case)
2005-02-22 23:13:00 +00:00
b8291ac5a4 2005-02-22 jrandom
* Temporary workaround for the I2CP disconnect bug (have the streaming lib
      try to automatically reconnect on accept()/connect(..)).
    * Loop check for expired lease republishing (just in case)
2005-02-22 22:58:21 +00:00
c17433cb93 2005-02-22 jrandom
* Adjusted (and fixed...) the timestamper change detection
    * Deal with a rare reordering bug at the beginning of a stream (so we
      don't drop it unnecessarily)
    * Cleaned up some dropped message handling in the router
    * Reduced job queue churn when dealing with a large number of tunnels by
      sharing an expiration job
    * Keep a separate list of the most recent CRIT messages (shown on the
      logs.jsp).  This way they don't get buried among any other messages.
    * For clarity, display the tunnel variance config as "Randomization" on
      the web console.
    * If lease republishing fails (boo! hiss!) try it again
    * Actually fix the negative jobLag in the right place (this time)
    * Allow reseeding when there are fewer than 10 known peer references
    * Lots of logging updates.
2005-02-22 07:07:29 +00:00
35fe7f8203 2005-02-20 jrandom
* Allow the streaming lib resend frequency to drop down to 20s as the
      minimum, so that up to 2 retries can get sent on an http request.
    * Add further limits to failsafe tunnels.
    * Keep exploratory and client tunnel testing and building stats separate.
    * Only use the 60s period for throttling tunnel requests due to transient
      network overload.
    * Rebuild tunnels earlier (1-3m before expiration, by default)
    * Cache the next hop's routerInfo for participating tunnels so that the
      tunnel participation doesn't depend on the netDb.
    * Fixed a long standing bug in the streaming lib where we wouldn't always
      unchoke messages when the window size grows.
    * Make sure the window size never reaches 0 (duh)
2005-02-21 19:08:01 +00:00
21f13dba43 2005-02-20 jrandom
* Allow the streaming lib resend frequency to drop down to 20s as the
      minimum, so that up to 2 retries can get sent on an http request.
    * Add further limits to failsafe tunnels.
    * Keep exploratory and client tunnel testing and building stats separate.
    * Only use the 60s period for throttling tunnel requests due to transient
      network overload.
    * Rebuild tunnels earlier (1-3m before expiration, by default)
    * Cache the next hop's routerInfo for participating tunnels so that the
      tunnel participation doesn't depend on the netDb.
    * Fixed a long standing bug in the streaming lib where we wouldn't always
      unchoke messages when the window size grows.
    * Make sure the window size never reaches 0 (duh)
2005-02-21 18:02:14 +00:00
0db239a3fe added irc.postman.i2p 2005-02-21 03:13:40 +00:00
4745d61f9b added subrosa.i2p 2005-02-21 02:55:12 +00:00
b9a4c3ba52 *cough* 2005-02-20 11:09:05 +00:00
cbf6a70a1a 2005-02-20 jrandom
* Only build failsafe tunnels if we need them
    * Properly implement the selectNotFailingPeers so that we get a random
      selection of peers, rather than using the strictOrdering (thanks dm!)
    * Don't include too many "don't tell me about" peer references in the
      lookup message - only send the 10 peer references closest to the target.
2005-02-20 09:12:43 +00:00
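The "closest to the target" trimming in the last item above can be sketched as a sort by XOR distance between hashes. This is an assumption about the metric, and the names below are hypothetical, not the netDb's actual code:

    import java.math.BigInteger;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Sketch only: keep the N peer references closest to the lookup target.
    public class ClosestPeersSketch {
        static BigInteger xorDistance(byte[] a, byte[] b) {
            byte[] d = new byte[a.length];
            for (int i = 0; i < d.length; i++)
                d[i] = (byte) (a[i] ^ b[i]);
            return new BigInteger(1, d);
        }
        static List<byte[]> closest(final byte[] target, List<byte[]> peers, int max) {
            List<byte[]> sorted = new ArrayList<byte[]>(peers);
            Collections.sort(sorted, new Comparator<byte[]>() {
                public int compare(byte[] a, byte[] b) {
                    return xorDistance(target, a).compareTo(xorDistance(target, b));
                }
            });
            return sorted.subList(0, Math.min(max, sorted.size()));
        }
        public static void main(String[] args) {
            List<byte[]> peers = new ArrayList<byte[]>();
            peers.add(new byte[] { 0x70, 0x01 });
            peers.add(new byte[] { 0x00, 0x05 });
            peers.add(new byte[] { 0x00, 0x7f });
            System.out.println(closest(new byte[2], peers, 2).size()); // the 2 closest kept
        }
    }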
7d4e093b58 2005-02-19 jrandom
* Only build new extra tunnels on failure if we don't have enough
    * Fix a fencepost in the tunnel building so that e.g. a variance of
      2 means +/- 2, not +/- 1 (thanks dm!)
    * Avoid an NPE on client disconnect
    * Never select a shitlisted peer to participate in a tunnel
    * Have netDb store messages timeout after 10s, not the full 60s (duh)
    * Keep session tags around for a little longer, just in case (grr)
    * Cleaned up some closing event issues on the streaming lib
    * Stop bundling the jetty 5.1.2 and updated wrapper.config in the update
      so that 0.4.* users will need to do a clean install, but we don't need
      to shove an additional 2MB in each update to those already on 0.5.
    * Imported the susimail css (oops, thanks susi!)
2005-02-19 23:20:56 +00:00
d27feabcb3 clear the old precompiled .java files (thanks duck!) 2005-02-18 16:56:46 +00:00
0d9efa17de bah, fuck it. we can deal with a little shitlisting 2005-02-18 16:04:51 +00:00
b125b04c8d 0.5 released 2005-02-18 15:58:20 +00:00
0539f1d794 updated for the new props 2005-02-18 15:35:02 +00:00
a4b6709f02 added moxonom.i2p 2005-02-18 15:27:46 +00:00
f2db143a6f Refuse to load 0.4 routerInfo (to help weed out the old ones) 2005-02-18 15:25:25 +00:00
b615f54d41 *cough* 2005-02-18 08:28:56 +00:00
db2328e03e * actually reseed properly
* hide the susimail deprecation warnings
* don't push hosts.txt in the update (people can subscribe if they want to)
2005-02-18 08:12:40 +00:00
e2071935ad added sex0r.i2p flock.i2p cneal.i2p www.nntp.i2p wallsgetbombed.i2p
thedarkside.i2p legion.i2p manveru.i2p books.manveru.i2p bt.i2p
2005-02-18 06:23:29 +00:00
1c40ff773f fproxy.i2p and www1.squid.i2p are back (yay!) 2005-02-18 00:52:26 +00:00
37a3645663 Default subscriptions shouldn't rely on a pre-existing hosts.txt. 2005-02-18 00:50:18 +00:00
eb8accd1e0 damn those copyright laws 2005-02-17 23:59:52 +00:00
3af97894b4 tyop 2005-02-17 23:45:50 +00:00
15a0dcf4d8 (not yet tagging this 0.5, but I don't think there's anything left)
2005-02-17  jrandom
    * If the clock is adjusted during a job run, don't act as if the job took
      negative time.
2005-02-17 22:57:53 +00:00
aa3a44c42a 2005-02-17 jrandom
* Included the GPL'ed susimail 0.13 by default (thanks susi23!)
2005-02-17 20:55:07 +00:00
40f4b47b87 initial vanilla import of susimail 0.13 (no build script yet) 2005-02-17 20:08:53 +00:00
dca09d96b3 logging 2005-02-17 19:49:16 +00:00
dd10747460 2005-02-17 jrandom
* Fixed the braindead tunnel testing logic
    * If a large number of tunnels are failing (within the last 5-10 minutes)
      and the current tunnel pool's configuration allows it, randomly build a
      zero hop tunnel to replace failed tunnels.
    * Enable postman's POP3 and SMTP tunnels by default
2005-02-17 17:59:27 +00:00
77176162af 2005-02-16 jrandom
* Added some error handling when the number of session tags exceeds the
      realistic capacity, dropping a random chunk of received tag sets and
      conducting some minor analysis of the remaining ones.  This is a part
      of a pretty serious error condition, and logs as CRIT (if/when people
      see "TOO MANY SESSION TAGS!", please let me know the full log line it
      puts in the wrapper.log or /logs.jsp)
    * Update the addressbook to only write to the published hosts location
      if the addressbook's config contains "should_publish=true" (by default,
      it contains "should_publish=false")
2005-02-17 04:08:34 +00:00
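A sketch of the publishing gate described in the last item above, assuming the addressbook config is a plain java.util.Properties-style file (the path below is hypothetical):

    import java.io.File;
    import java.io.FileInputStream;
    import java.util.Properties;

    // Sketch only: publish hosts.txt only if should_publish=true in the config.
    public class AddressbookPublishSketch {
        public static void main(String[] args) throws Exception {
            Properties cfg = new Properties();
            File f = new File("addressbook/config.txt"); // path assumed, not from the entry
            if (f.exists())
                cfg.load(new FileInputStream(f));
            boolean publish = "true".equals(cfg.getProperty("should_publish", "false"));
            System.out.println(publish ? "would write the published hosts location"
                                       : "publishing disabled (default)");
        }
    }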
8b9ee4dfd7 updated to reflect what was implemented 2005-02-17 00:48:18 +00:00
6e8e77b9ec 0.5 merging 2005-02-16 22:43:00 +00:00
7ef9ce8cc6 0.5 merging 2005-02-16 22:37:24 +00:00
9646ac2911 continuing 0.5 merges 2005-02-16 22:35:12 +00:00
566a713baa 2005-02-16 jrandom
* (Merged the 0.5-pre branch back into CVS HEAD)
    * Replaced the old tunnel routing crypto with the one specified in
      router/doc/tunnel-alt.html, including updates to the web console to view
      and tweak it.
    * Provide the means for routers to reject tunnel requests with a wider
      range of responses:
        probabilistic rejection, due to approaching overload
        transient rejection, due to temporary overload
        bandwidth rejection, due to persistent bandwidth overload
        critical rejection, due to general router fault (or imminent shutdown)
      The different responses are factored into the profiles accordingly.
    * Replaced the old I2CP tunnel related options (tunnels.depthInbound, etc)
      with a series of new properties, relevant to the new tunnel routing code:
        inbound.nickname (used on the console)
        inbound.quantity (# of tunnels to use in any leaseSets)
        inbound.backupQuantity (# of tunnels to keep at the ready)
        inbound.length (# of remote peers in the tunnel)
        inbound.lengthVariance (if > 0, permute the length by adding a random #
                                up to the variance.  if < 0, permute the length
                                by adding or subtracting a random # up to the
                                variance)
        outbound.* (same as the inbound, except for the, uh, outbound tunnels
                    in that client's pool)
      There are other options, and more will be added later, but the above are
      the most relevant ones (a sample set is sketched after this entry).
    * Replaced Jetty 4.2.21 with Jetty 5.1.2
    * Compress all profile data on disk.
    * Adjust the reseeding functionality to work even when the JVM's http proxy
      is set.
    * Enable a poor-man's interactive-flow in the streaming lib by choking the
      max window size.
    * Reduced the default streaming lib max message size to 16KB (though still
      configurable by the user), also doubling the default maximum window
      size.
    * Replaced the RouterIdentity in a Lease with its SHA256 hash.
    * Reduced the overall I2NP message checksum from a full 32 byte SHA256 to
      the first byte of the SHA256.
    * Added a new "netId" flag to let routers drop references to other routers
      who we won't be able to talk to.
    * Extended the timestamper to get a second (or third) opinion whenever it
      wants to actually adjust the clock offset.
    * Replaced that kludge of a timestamp I2NP message with a full blown
      DateMessage.
    * Substantial memory optimizations within the router and the SDK to reduce
      GC churn.  Client apps and the streaming libs have not been tuned,
      however.
    * More bugfixes than you can shake a stick at.

2005-02-13  jrandom
    * Updated jbigi source to handle 64bit CPUs.  The bundled jbigi.jar still
      only contains 32bit versions, so build your own, placing libjbigi.so in
      your install dir if necessary.  (thanks mule!)
    * Added support for libjbigi-$os-athlon64 to NativeBigInteger and CPUID
      (thanks spaetz!)
2005-02-16 22:23:47 +00:00
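A sample set of the tunnel-pool options listed in the 0.5 merge entry above, as they might appear among a client's I2CP session options (the property names come from the entry; the values are illustrative only, not recommendations):

    import java.util.Properties;

    // Sketch only: per-client tunnel pool configuration.
    public class TunnelPoolOptionsSketch {
        public static void main(String[] args) {
            Properties opts = new Properties();
            opts.setProperty("inbound.nickname", "myClient");
            opts.setProperty("inbound.quantity", "2");        // tunnels used in leaseSets
            opts.setProperty("inbound.backupQuantity", "0");  // spares kept at the ready
            opts.setProperty("inbound.length", "2");          // remote hops per tunnel
            opts.setProperty("inbound.lengthVariance", "1");  // permute length by up to +/- 1
            opts.setProperty("outbound.length", "2");
            System.out.println(opts);
        }
    }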
36f7e98e90 added riaa.i2p 2005-02-16 14:03:22 +00:00
4da755816a added mpaa.i2p 2005-02-15 03:08:45 +00:00
3ef0258faf file TrivialRouterPreprocessor.java was initially added on branch i2p_0_5_pre_branch. 2005-02-14 22:15:20 +00:00
293ceaee93 2005-02-10 smeghead
* Initial check-in of Pants, a new utility to help us manage our 3rd-party
      dependencies (Fortuna, Jetty, Java Service Wrapper, etc.). Some parts of
      Pants are still non-functional at this time so don't mess with it yet
      unless you want to potentially mangle your working copy of CVS.
2005-02-11 02:44:47 +00:00
7b58d0fa0f Allow an unneeded newline in the SAM client protocol without disconnecting. 2005-02-09 19:28:29 +00:00
bd68c1e056 file configtunnels.jsp was initially added on branch i2p_0_5_pre_branch. 2005-02-09 18:15:53 +00:00
200162d973 file ConfigTunnelsHelper.java was initially added on branch i2p_0_5_pre_branch. 2005-02-09 18:15:52 +00:00
4b37a53f1c file TunnelHelper.java was initially added on branch i2p_0_5_pre_branch. 2005-02-09 13:29:02 +00:00
2d41de7ae0 Restore original method of filtering names with non .i2p tlds 2005-02-09 02:21:43 +00:00
bc5bc62c18 file CachingByteArrayOutputStream.java was initially added on branch i2p_0_5_pre_branch. 2005-02-08 22:22:15 +00:00
a0d680024e added pants.i2p 2005-02-08 21:11:24 +00:00
45013feea7 file DateMessage.java was initially added on branch i2p_0_5_pre_branch. 2005-02-08 13:28:51 +00:00
2abbe992dd file BloomFilterIVValidator.java was initially added on branch i2p_0_5_pre_branch. 2005-02-08 12:23:29 +00:00
d7081b3eeb file KeySelector.java was initially added on branch i2p_0_5_pre_branch. 2005-02-07 20:41:46 +00:00
a2f5289bd9 file DecayingBloomFilter.java was initially added on branch i2p_0_5_pre_branch. 2005-02-07 20:41:45 +00:00
b366a4b942 2005-02-07 jrandom
* Fixed a race in the streaming lib's delayed flush algorithm (thanks anon!)
2005-02-07 10:04:23 +00:00
27e92653fe 2005-02-06 Sugadude
* Added a filter to the addressbook to remove entries that don't end in ".i2p"
(thanks Sugadude!)
2005-02-06 22:14:46 +00:00
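The ".i2p" filter in the entry above amounts to dropping any hostname without that suffix; a toy version (not the addressbook's actual code):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch only: keep addressbook entries whose names end in ".i2p".
    public class TldFilterSketch {
        static Map<String, String> filter(Map<String, String> entries) {
            Map<String, String> kept = new LinkedHashMap<String, String>();
            for (Map.Entry<String, String> e : entries.entrySet())
                if (e.getKey().endsWith(".i2p"))
                    kept.put(e.getKey(), e.getValue());
            return kept;
        }
        public static void main(String[] args) {
            Map<String, String> m = new LinkedHashMap<String, String>();
            m.put("duck.i2p", "base64dest...");
            m.put("example.com", "base64dest...");
            System.out.println(filter(m).keySet()); // [duck.i2p]
        }
    }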
80120b7b7d added entropy feeding interface, and hooked it up to the end of the DH exchange (source=DH) as well as the end of the ElGamal/AES decrypt (source=ElG/AES). the default RandomSource ignores this data 2005-02-06 08:38:07 +00:00
af8a618826 added irc.carambar.i2p 2005-02-04 17:49:10 +00:00
af0e554562 file PooledTunnelCreatorConfig.java was initially added on branch i2p_0_5_pre_branch. 2005-02-04 07:31:42 +00:00
382cbb18db 2005-02-03 smeghead
* Added Ant buildfile in apps/fortuna for creating a custom Fortuna PRNG jar
      library from GNU Crypto's CVS HEAD sources.
2005-02-03 13:39:46 +00:00
252b523155 file TunnelPoolManager.java was initially added on branch i2p_0_5_pre_branch. 2005-02-01 13:37:30 +00:00
4303b3b716 file TunnelPoolSettings.java was initially added on branch i2p_0_5_pre_branch. 2005-02-01 13:37:29 +00:00
87715dc21a file DummyValidator.java was initially added on branch i2p_0_5_pre_branch. 2005-02-01 13:37:28 +00:00
8552494fc1 file TunnelGatewayMessage.java was initially added on branch i2p_0_5_pre_branch. 2005-02-01 13:37:25 +00:00
1c2290b613 added general.i2p 2005-01-28 22:30:47 +00:00
5f6060b801 2005-01-26 smeghead
* i2pProxy.pac, i2pbench.sh, and i2ptest.sh are now shipped with the dist
      packages and installed to $i2pinstalldir/scripts.
    * Added command line params to i2ptest.sh and i2pbench.sh: --gij to run them
      using gij + libgcj, and --sourcedir to run them from the source tree
      instead of the installation directory.
    * Fixed unreachable for() statement clause in the KBucketImpl class that was
      causing gcj to toss a compilation warning (jrandom++).
2005-01-27 04:48:41 +00:00
b39958604d added smeghead.i2p 2005-01-27 01:35:24 +00:00
22ca1491bc 2005-01-26 smeghead
* Added a couple of scripts, i2ptest.sh and i2pbench.sh, to manage the core
      tests and benchmarks.
    * Routerconsole now builds under gcj 3.4.3.
    * Corrected divide by zero error in TunnelId class under gcj (jrandom++).
2005-01-27 00:21:10 +00:00
690d7e30cf added nntp.fr.i2p (w00t) 2005-01-26 22:07:53 +00:00
4fac2f1094 2005-01-25 smeghead
* Tweaked some classes to enable gcj 3.4.3 to compile the router and
      supporting apps (except for the routerconsole which is still being
      investigated).
2005-01-26 06:29:17 +00:00
eb0935d577 added deadgod.i2p 2005-01-26 04:32:00 +00:00
425fedf55b outbound tunnels passing tests, now to start hacking on the tie-in 2005-01-25 21:42:25 +00:00
a33de09ae6 * implemented fragmentation
* added more inbound tests
* made the tunnel preprocessing header more clear and included better fragmentation support
(still left: tests for outbound tunnel processing, structures and jobs to integrate with the router,
remove that full SHA256 from each and every I2NPMessage or put a smaller one at the
transport layer, and all the rest of the tunnel pooling/building stuff)
2005-01-25 05:46:22 +00:00
5018e56103 oops, moving README and sam-sharp.build out of the source directory 2005-01-24 23:43:37 +00:00
de2c975ac2 2005-01-24 smeghead
* C#-ification of sam-sharp: interface greatly simplified using delegates
      and events; SamBaseEventHandler provides basic implementation and helper
      methods but is now optional.
    * NAnt buildfile and README added for sam-sharp.
2005-01-24 22:42:05 +00:00
d86e2c0f59 2005-01-23 smeghead
* Port the java SAM client library to mono/C# and released into the
      public domain.  The 0.1 version of this port is available in CVS as
      i2p/apps/sam/csharp/src/I2P.SAM.Client.  The other nonfunctional C#
      library has been removed.
2005-01-23 08:22:11 +00:00
14023163b3 added manveru.i2p 2005-01-23 05:09:34 +00:00
d85dc8213e 2005-01-21 Jhor
* Updated jbigi build scripts for OSX.
2005-01-21  jrandom
    * Added support for OSX to the NativeBigInteger code so that it will look
      in the classpath for libjbigi-osx-none.jnilib.  At the moment, that file
      is not bundled with the shipped jbigi.jar yet though.
2005-01-22 01:53:02 +00:00
f6a34055ac removed the tunnel.html-style tunnel encryption and implemented the new tunnel-alt.html style
still much to be done beyond this, but this stuff turned out quite trivial (w00t)
2005-01-21 07:54:56 +00:00
3beb0d9c12 added fr.i2p 2005-01-20 11:53:06 +00:00
60968fe6f1 added imhotep.i2p 2005-01-20 08:57:16 +00:00
517c3101c7 added jrandom.dev.i2p 2005-01-20 02:51:31 +00:00
998f03ba68 killed the loops and the PRNGs by having the tunnel participants themselves specify what
tunnel ID they listen on and make sure the previous peer doesn't change over time.  The
worst that a hostile peer could do is create a multiplicative work factor - they send N
messages, causing N*#hops in the loop of bandwidth usage.  This is identical to the hostile
peer simply building a pair of tunnels and sending N messages through them.
also added some discussion about the tradeoffs and variations wrt fixed size tunnel messages.
2005-01-19 23:13:10 +00:00
f3b0e0cfc7 we want to use E on the preIV, not HMAC - must be invertible (duh, thanks Connelly)
adjusted preIV size accordingly, and definitely use a delivered layerIVKey
2005-01-19 06:24:25 +00:00
a65e6c888c 2005-01-18 jrandom
* Increased the max # session tags maintained and decreased slightly the
      period over which they are gathered.
2005-01-19 00:08:13 +00:00
cd939d3379 speling mistaces 2005-01-18 16:21:12 +00:00
29e5aeff5c include the preIV in the verification hash 2005-01-18 16:01:55 +00:00
0e5cf81fca updates with new alternative crypto, including Connelly's suggestions for the IV 2005-01-18 15:55:17 +00:00
61f217c610 2005-01-17 jrandom
* Added meaningful support for adjusting the preferred message size in the
      streaming lib by setting the i2p.streaming.maxMessageSize=32768 (or
      whatever).  The other side will mimic a reduction (but never an increase).
    * Always make sure to use distinct ConnectionOption objects for each
      connection (duh)
    * Reduced the default ACK delay to 500ms in the streaming lib
    * Only shrink the streaming window once per window
    * Don't bundle a new jetty.xml with updates
    * Catch another local routerInfo corruption issue on startup.
2005-01-17 08:15:00 +00:00
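Setting the option named in the first item above might look like this; only the property name and the 32768 example value come from the entry, the surrounding code is an illustration:

    import java.util.Properties;

    // Sketch only: ask the streaming lib for 32KB messages.
    public class MaxMessageSizeSketch {
        public static void main(String[] args) {
            Properties opts = new Properties();
            opts.setProperty("i2p.streaming.maxMessageSize", "32768");
            // The remote side will mirror a reduction, never an increase.
            System.out.println(opts);
        }
    }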
ccb1f491c7 use the first 16 bytes of the SHA256 for the columns & verification block, rather than all 32 bytes.
(AES won't let us go smaller.  oh well)
2005-01-16 06:07:06 +00:00
49fdac9b4e added ttp.i2p 2005-01-16 04:45:36 +00:00
6b6a9490f6 include blurb explaining tunnelIDs and replay prevention (thanks Connelly!) 2005-01-16 00:08:14 +00:00
2c783e9876 2005-01-15 cervantes
* Added support to the eepproxy for URLs such as
      http://localhost:4444/eepproxy/foo.i2p/bar/baz or even
      http://localhost:4444/eepproxy/foo.i2p/?i2paddresshelper=base64
2005-01-15 23:16:12 +00:00
ecd971c0e5 2005-01-15 jrandom
* Caught a series of (previously unhandled) errors caused by requeueing
      messages that had timed out on the TCP transport (thanks mae^!)
    * Reduce the barrier to dropping session tags on streaming lib resends -
      every fourth send should drop the tags, forcing ElGamal encryption.  This
      will help speed up the recovery after a disconnect, rather than the drop
      every fifth send.
2005-01-15 21:03:14 +00:00
c48875a6fb cbc, nimwit 2005-01-15 06:43:35 +00:00
a245ccb8b7 added freenet.eco.i2p, tracker.i2p, photo.i2p 2005-01-15 05:52:09 +00:00
75a18debcb forgot to update the processing xor 2005-01-15 03:53:13 +00:00
1a15d3bb55 filled in the tunnel building alternatives, throttling techniques, and mixing (meta)details 2005-01-15 00:06:40 +00:00
ffdcae47e3 add some whitening to the IV as it goes down the path 2005-01-14 22:43:43 +00:00
34a2bc8590 added hopekiller.i2p, microsoft.i2p, jhor.i2p, badtoys.i2p 2005-01-13 19:36:03 +00:00
8ae4d00ccb added mindspore.i2p 2005-01-13 19:31:06 +00:00
9ed6d5e7fb added irc.ircbnc.i2p (connect to it as an irc client, pass 'testpass' (may be changed/removed), and outproxy to irc servers) 2005-01-13 19:23:06 +00:00
c9243b241c added dvdr-core.i2p 2005-01-13 17:11:17 +00:00
9c364a64e3 more arm waving wrt the tunnel building 2005-01-13 00:57:36 +00:00
b34306205c let's just get some visual versioning clues 2005-01-12 19:22:40 +00:00
77f778dbf9 Updated the crypto so that peer0 is the gateway (meaning max hop length is 8, not 9).
This prevents the first peer after the gateway from looking at the encrypted data received
and seeing "hey, none of the checksum blocks match the payload, they must be the gateway".
2005-01-12 19:09:00 +00:00
23fa4e4161 Made userhosts.txt the default master addressbook, and hosts.txt the default router addressbook (mostly just testing if this will commit properly) 2005-01-12 04:26:33 +00:00
5b6fd0b829 dont wannit 2005-01-12 03:49:05 +00:00
8fa8d7739f work in progress, but i want it in cvs so i don't lose it again 2005-01-09 23:01:34 +00:00
dc552c7a29 html fix (just to clarify that K[i] isn't actually *transmitted*) 2005-01-07 23:15:38 +00:00
cf84f453d3 Initial implementation of the new tunnel encryption code. Still much more work to be
done (e.g. *what* gets encrypted, modifying the tunnelCreate messages, the tunnel
building process, and the new tunnel pooling).  I seem to have lost much of the typed
up docs describing this too, so I'll be hitting that next.
2005-01-07 22:55:30 +00:00
daf32a24bc * 2005-01-06 0.4.2.6 released
2005-01-06  jrandom
    * Added a startup message to the addressbook, printing its version number
      to stdout (which is sent to wrapper.config) when it loads.
    * Updated the addressbook to reread the config file periodically
    * Added orion.i2p to the list of eepsites on the default homepage
2005-01-06 20:59:13 +00:00
34ecfd9857 added j.i2p 2005-01-06 11:56:35 +00:00
4838564460 2005-01-05 jrandom
* Handle unexpected network read errors more carefully (thanks parg!)
    * Added more methods to partially compare (DataHelper) and display
      arrays (Base64.encode).
    * Exposed the AES encryptBlock/decryptBlock on the context.aes()
    * Be more generous on the throttle when just starting up the router
    * Fix a missing scheduled event in the streaming lib (caused after reset)
    * Add a new DisconnectListener on the I2PSocketManager to allow
      notification of session destruction.
    * Make sure our own router identity is valid, and if it isn't, build a new
      one and restart the router.  Alternately, you can run the Router with
      the single command line argument "rebuild" and it will do the same.
2005-01-06 00:17:53 +00:00
3dd2f67ff3 added bl.i2p 2005-01-05 23:16:35 +00:00
eco
0ccec3dde0 Added log entry for bt1.eco.i2p and jap.eco.i2p removal. Fix typo in numbering of previous log entry. 2005-01-04 12:29:12 +00:00
eco
ad77879caa removed jap.eco.i2p and bt1.eco.i2p (obsolete) 2005-01-04 12:22:38 +00:00
27999983cc added chat.i2p 2005-01-03 02:36:32 +00:00
48b039940d added phonebooth.i2p 2005-01-01 05:18:01 +00:00
84dc7d9d82 2004-12-31 ragnarok
* Integrated latest addressbook changes (2.0.3) which include support for
      deploying as a .war file with no existing addressbook configuration.
    * Updated main build process to bundle the addressbook.war in the
      i2pinstall.jar and i2pupdate.zip.
2005-01-01 00:57:01 +00:00
70d6332bad 2004-12-31 jrandom
* Speling fxi (thanks digum!)
    * Bugfix for the I2PTunnel web interface so that it now properly launches
      newly added tunnels that are defined to be run on startup (thanks ugha!)
2004-12-31 17:18:05 +00:00
aec0b0c86a 2004-12-30 jrandom
* Revised the I2PTunnel client and httpclient connection establishment
      throttles.  There is now a pool of threads that build the I2PSocket
      connections with a default size of 5, configurable via the I2PTunnel
      client option 'i2ptunnel.numConnectionBuilders' (if set to 0, it will
      not throttle the number of concurrent builders, but will launch a thread
      per socket during establishment).  In addition, sockets accepted but
      not yet allocated to one of the connection builders will be destroyed
      after 30 seconds, configurable via 'i2ptunnel.maxWaitTime' (if set to
      0, it will wait indefinitely).
2004-12-30 22:51:16 +00:00
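The two I2PTunnel client options named in the entry above, shown as a property set; the values and the millisecond unit for maxWaitTime are assumptions:

    import java.util.Properties;

    // Sketch only: throttle concurrent I2PSocket builders and cap the accept wait.
    public class TunnelClientThrottleSketch {
        public static void main(String[] args) {
            Properties opts = new Properties();
            opts.setProperty("i2ptunnel.numConnectionBuilders", "5"); // 0 = no throttle, thread per socket
            opts.setProperty("i2ptunnel.maxWaitTime", "30000");       // 0 = wait indefinitely (ms assumed)
            System.out.println(opts);
        }
    }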
099f6a88c2 2004-12-29 jrandom
* Imported Ragnarok's addressbook source (2.0.2) which is built but not
      deployed in the i2pinstall.jar/i2pupdate.zip (yet).
    * Don't treat connection inactivity closure as a connection error.
2004-12-29 22:16:42 +00:00
00a5d42d3d forgot to import these... 2004-12-29 20:51:43 +00:00
28f4a2cb67 imported ragnarok's MIT licensed addressbook-2.0.2 2004-12-29 20:28:20 +00:00
1ac18ba10e 2004-12-29 jrandom
* Add in a new keepalive event on each TCP connection, proactively sending
      a (tiny) time message every minute or two, as well as killing the
      connection if no message has been fully sent within 5 minutes or so.
      This should help deal with hung connections from IP address changes.
2004-12-29 20:06:43 +00:00
1503ee2dfa 2004-12-28 jrandom
* Cleaned up the resending and choking algorithm in the streaming lib.
    * Removed the read timeout override for I2PTunnel's httpclient, allowing
      it to use the default for the streaming lib.
    * Revised ack triggers in the streaming lib.
    * Logging.
2004-12-29 15:53:28 +00:00
484b528d4f * 2004-12-21 0.4.2.5 released
2004-12-21  jrandom
    * Track a new stat for expired client leases (client.leaseSetExpired).
2004-12-21 18:23:03 +00:00
758293dc02 2004-12-21 jrandom
* Cleaned up the postinstall/startup scripts a bit more to handle winME,
      and added windows info to the headless docs. (thanks ardvark!)
    * Fixed a harmless (yet NPE inspiring) race during the final shutdown of
      a stream (thanks frosk!)
    * Add a pair of new stats for monitoring tunnel participation -
      tunnel.participatingBytesProcessed (total # bytes transferred) and
      tunnel.participatingBytesProcessedActive (total # bytes transferred for
      tunnels whose byte count exceeds the 10m average).  This should help
      further monitor congestion issues.
    * Made the NamingService factory property public (thanks susi!)
2004-12-21 16:32:49 +00:00
6cb316b33e 2004-12-20 jrandom
* No longer do a blocking DNS lookup within the jobqueue (thanks mule!)
    * Set a 60s dns cache TTL, instead of 0s.  Most users who used to use
      dyndns/etc now just use IP autodetection, so the old "we need ttl=0"
      reasoning is gone.
2004-12-20 05:14:56 +00:00
1d31831e7d added up.i2p 2004-12-20 02:40:45 +00:00
ee32b07995 2004-12-19 jrandom
* Fix for a race on startup wrt the new stats (thanks susi!)
2004-12-19 18:55:09 +00:00
81f04ca692 2004-12-19 jrandom
* Added three new stats - router.activePeers, router.fastPeers, and
      router.highCapacityPeers, updated every minute
2004-12-19 16:27:10 +00:00
1756997608 2004-12-19 jrandom
* Added a new i2ptunnel type: 'httpserver', allowing you to specify what
      hostname should be sent to the webserver.  By default, new installs will
      have an httpserver pointing at their jetty instance with the spoofed
      name 'mysite.i2p' (editable on the /i2ptunnel/edit.jsp page).
2004-12-19 11:04:56 +00:00
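What such an 'httpserver' tunnel definition could look like, as key/value pairs; only the tunnel type and the spoofed 'mysite.i2p' name come from the entry above, while the property names and ports are hypothetical:

    import java.util.Properties;

    // Sketch only: an httpserver tunnel that spoofs the Host header it forwards.
    public class HttpServerTunnelSketch {
        public static void main(String[] args) {
            Properties t = new Properties();
            t.setProperty("type", "httpserver");
            t.setProperty("targetHost", "127.0.0.1");   // assumed local jetty
            t.setProperty("targetPort", "7658");        // assumed port
            t.setProperty("spoofedHost", "mysite.i2p"); // name sent to the webserver
            System.out.println(t);
        }
    }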
ec11ea4ca7 * Convert native jcpuid code from C++ to C. This should alleviate build
problems experienced by some users.
2004-12-19 06:25:27 +00:00
a1ebf85e1b added dm.i2p 2004-12-18 06:31:22 +00:00
4b2a734cda * 2004-12-18 0.4.2.4 released 2004-12-18 04:07:13 +00:00
97ae8f78a0 added piespy.i2p 2004-12-18 03:11:03 +00:00
834665c3ba 2004-12-16 jrandom
* Catch another oddball case for a reset connection in the streaming lib.
    * Add a dumpprofile.jsp page, called with ?peer=base64OfPeerHash, which
      dumps the current state of that peer's profile.  Instead of the full
      base64, you can pass in however many characters you have and it will
      return the first match found.
2004-12-16 10:32:26 +00:00
d969dd2d8d 2004-12-16 jrandom
* Catch another oddball case for a reset connection in the streaming lib.
    * Add a dumpprofile.jsp page, called with ?peer=base64OfPeerHash, which
      dumps the current state of that peer's profile.  Instead of the full
      base64, you can pass in however many characters you have and it will
      return the first match found.
2004-12-16 10:21:23 +00:00
3cb727561c uugly stat dumper. call via /dumpstats.jsp?peer=routerIdentHash 2004-12-16 09:45:31 +00:00
cbc89376d3 2004-12-16 jrandom
* Remove the randomized factor in the tunnel rejection by bandwidth -
      we now accept the request if we've allocated less than our limit
      and reject it if we've allocated more.
    * Stick to the standard capacity scale on tunnel rejection, even for
      the 10m period.
    * Build the time message at the very last possible moment
2004-12-16 05:42:03 +00:00
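The first item above reduces to a deterministic comparison; a toy version with illustrative numbers:

    // Sketch only: accept a tunnel request while allocation is under the limit.
    public class TunnelAcceptSketch {
        static boolean accept(long allocatedBytesPerSec, long limitBytesPerSec) {
            return allocatedBytesPerSec < limitBytesPerSec;
        }
        public static void main(String[] args) {
            System.out.println(accept(40000, 64000)); // true: under the limit
            System.out.println(accept(70000, 64000)); // false: over, so reject
        }
    }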
66aa29e3d4 2004-12-15 jrandom
* Handle hard disconnects more gracefully within the streaming lib, and
      log unmonitored events more aggressively.
    * If we drop a peer after connection due to clock skew, log it to the
      /logs.jsp#connectionlogs with relevant info.  In addition, toss it in
      the stat 'tcp.disconnectAfterSkew'.
    * Fixed the formatting in the skew display
    * Added an ERROR message that is fired once after we run out of
      routerInfo files (thanks susi!)
    * Set the connect timeout equal to the streaming lib's disconnect timeout
      if not already specified (the I2PTunnel httpclient already enforces a
      60s connect timeout)
    * Fix for another connection startup problem in the streaming lib.
    * Fix for a stupid error in the probabilistic drop (rand <= P, not > P)
    * Adjust the capacity calculations so that tunnel failures alone in the
      last 10m will not trigger a 0 capacity rank.
2004-12-16 02:45:55 +00:00
5c72aca5ee added sciencebooks.i2p 2004-12-15 04:23:16 +00:00
8824815d6d 2004-12-14 jrandom
* Periodically send a message along all I2NP connections with the router's
      current time, allowing the receiving peer to determine that the clock
      has skewed too much, and hence, disconnect.  For backwards compatibility
      reasons, this is being kludged into a DeliveryStatusMessage (ewww).  The
      next time we have a backwards compatibility break, we can put in a proper
      message setup for it.
2004-12-14 16:42:35 +00:00
ad72e5cbdf 2004-12-14 jrandom
* Reenable the probabilistic drop on the TCP queues to deal with good old
      fashioned bandwidth limiting.  However, by default the probability is
      rigged to reserve 0% of the queue free - meaning we just aggressively
      fail messages in the queue if we're transferring too slowly.  That
      reservation factor can be increased with 'tcp.queueFreeFactor=0.25'
      (or whatever) and the drop code can be disabled with the parameter
      'tcp.dropProbabalistically=false'.
    * Still penalize a peer on tunnel failure, but don't immediately drop
      their capacity to 0.
    * More aggressively ACK duplicates
    * Randomize the timestamper period
    * Display the clock skew on the connection logs when a peer sends it.
    * Allow the timestamper to fix skews of up to 10 minutes
    * Logging
2004-12-14 12:14:41 +00:00
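The two router properties named in the entry above, with example values; the 0.25 reservation is the example given in the entry itself and the rest is illustrative:

    import java.util.Properties;

    // Sketch only: reserve part of the TCP queue, or disable the drop entirely.
    public class TcpQueueDropSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.setProperty("tcp.queueFreeFactor", "0.25");        // default reserves 0%
            p.setProperty("tcp.dropProbabalistically", "false"); // disables the drop code
            System.out.println(p);
        }
    }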
b2f183fc17 2004-12-14 jrandom
* Reenable the probabilistic drop on the TCP queues to deal with good old
      fashioned bandwidth limiting.  However, by default the probability is
      rigged to reserve 0% of the queue free - meaning we just aggressively
      fail messages in the queue if we're transferring too slowly.  That
      reservation factor can be increased with 'tcp.queueFreeFactor=0.25'
      (or whatever) and the drop code can be disabled with the parameter
      'tcp.dropProbabalistically=false'.
    * Still penalize a peer on tunnel failure, but don't immediately drop
      their capacity to 0.
    * More aggressively ACK duplicates
    * Randomize the timestamper period
    * Display the clock skew on the connection logs when a peer sends it.
    * Allow the timestamper to fix skews of up to 10 minutes
    * Logging
2004-12-14 11:54:39 +00:00
9e16bc203a 2004-12-13 jrandom
* Added some error checking on the new client send job (thanks duck!)
    * Implemented tunnel rejection based on bandwidth usage (rejecting tunnels
      proportional to the bytes allocated in existing tunnels vs the bytes
      allowed through the bandwidth limiter).
    * Enable a new configuration parameter for triggering a tunnel rebuild
      (tunnel.maxTunnelFailures), where that is the max allowed test failures
      before killing the tunnel (default 0).
    * Gather more data that we rank capacity by (now we monitor and balance the
      data from 10m/30m/60m/1d instead of just 10m/60m/1d).
    * Fix a truncation/type conversion problem on the long term capacity
      values (we were ignoring the daily stats outright)
2004-12-13 13:45:52 +00:00
83c6eac017 added forum.fr.i2p, fedo.i2p, and pastebin.i2p 2004-12-13 08:48:29 +00:00
d5b277a536 test to verify that a closed socket is propagated to the client 2004-12-13 01:18:24 +00:00
77ce6c33e3 2004-12-11 jrandom
* Fix the missing HTTP timeout, which was caused by the deferred syn used
      by default.  This, in turn, meant the I2PSocket creation doesn't fail
      on .connect, but is unable to transfer any data in any direction.  We now
      detect that condition for the I2PTunnelHTTPClient and throw up the right
      error page.
    * Logging
2004-12-11 09:26:23 +00:00
60f8d349cf 2004-12-11 jrandom
* Use a simpler and less memory intensive job for processing outbound
      client messages when the session is in mode=bestEffort.  We can
      immediately discard the data as soon as it's sent the first time,
      rather than wait for an ack, since we will never internally resend.
    * Reduce some synchronization to avoid a rare deadlock
    * Replaced 'localhost' with 127.0.0.1 in the i2ptunnel config, and special
      case it within the tunnel controller.
    * Script cleanup for building jbigi/jcpuid
    * Logging
2004-12-11 07:05:12 +00:00
f539c3df70 added frosk.i2p 2004-12-11 05:08:14 +00:00
fe1cf1758c added theland.i2p 2004-12-11 04:16:10 +00:00
8c71c26487 added dox.i2p 2004-12-11 01:22:03 +00:00
2ce39d1fd4 added amiga.i2p 2004-12-10 22:37:29 +00:00
88a994b712 cleaned up paths 2004-12-10 13:12:07 +00:00
24c8cc1a0c reference the new gmp 4.1.4 2004-12-10 10:22:17 +00:00
caf684394c crypto unit test to allow cross-architecture testing of jbigi and the crypto code 2004-12-10 08:48:23 +00:00
4b74510450 added source instructions (thanks bens\!) 2004-12-10 00:34:53 +00:00
mpc
0ddcfc423e *** empty log message *** 2004-12-09 14:05:22 +00:00
b4ac56e204 added frooze.i2p 2004-12-09 08:27:45 +00:00
3b19ac3942 use the standard I2CP port (7654) not jrandom's local I2CP port (duh) 2004-12-09 00:29:29 +00:00
af52cad4ea * 2004-12-08 0.4.2.3 released 2004-12-08 21:08:10 +00:00
d88396c1e2 2004-12-08 jrandom
* Revised the buffering when reading from the SAM client and writing
      to the stream.  Also added a thread (sigh) so we don't block the
      SAM client from giving us more messages for abnormally long periods
      of time.
    * Display the router version in the logs on startup (oft requested)
    * Fix a race during the closing of a messageOutputStream
2004-12-08 17:16:16 +00:00
4c5f7b9451 aliased gott.i2p as jrandom.i2p (ed. note: no, i am not gott) 2004-12-08 07:45:09 +00:00
e601cedbb8 2004-12-06 jrandom
* Don't do a 'passive flush' while there are already outbound messages
      unacked.
    * Show the reseed link if up to 10 peer profiles are active (thanks
      dburton!)
2004-12-07 01:35:53 +00:00
fa12dc867f 2004-12-06 jrandom
* Don't do a 'passive flush' while there are already outbound messages
      unacked.
    * Show the reseed link if up to 10 peer profiles are active (thanks
      dburton!)
2004-12-07 01:09:16 +00:00
acfb6c4578 Added sonax.i2p 2004-12-06 22:13:41 +00:00
e52d637092 2004-12-06 jrandom
* Don't propagate streaming connection failures out to the SAM bridge as
      fatal errors.
    * Don't barf on repeated I2CP closure.
2004-12-06 05:03:57 +00:00
2fba055696 2004-12-05 jrandom
* Explicitly use "127.0.0.1" to bind the I2CP listener, not the JVM's
      getLocalhost call
2004-12-06 02:08:02 +00:00
88bb176f3b 2004-12-05 jrandom
* Default the I2CP listener to localhost only, unless overridden by
      i2cp.tcp.bindAllInterfaces=true (thanks dm!)
    * More SAM fixes for things recently broken (whee)
2004-12-06 00:54:07 +00:00
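The default-localhost behaviour described above, sketched with plain java.net sockets rather than the router's I2CP code; 7654 is the standard I2CP port noted elsewhere in this log:

    import java.net.InetAddress;
    import java.net.ServerSocket;

    // Sketch only: bind 127.0.0.1 unless i2cp.tcp.bindAllInterfaces=true.
    public class I2cpBindSketch {
        public static void main(String[] args) throws Exception {
            boolean bindAll = Boolean.parseBoolean(
                    System.getProperty("i2cp.tcp.bindAllInterfaces", "false"));
            InetAddress addr = bindAll ? null : InetAddress.getByName("127.0.0.1");
            ServerSocket ss = new ServerSocket(7654, 50, addr); // null = all interfaces
            System.out.println("listening on " + ss.getLocalSocketAddress());
            ss.close();
        }
    }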
499eeb275b added 1.fcp.freenet.i2p and copied fcp.i2p to 2.fcp.freenet.i2p 2004-12-05 22:18:57 +00:00
61a8d679bb 2004-12-05 jrandom
* Fix the recently broken SAM bridge (duh)
    * Add a new pair of SAM apps - net.i2p.sam.client.SAMStreamSink and
      net.i2p.sam.client.SAMStreamSend, mirroring the streaming lib's
      StreamSink and StreamSend apps for transferring files.
    * Make the passive flush timer fire more frequently.
2004-12-05 15:32:32 +00:00
2bbde91625 2004-12-05 jrandom
* Fixed some links in the console (thanks ugha!) and the javadoc
      (thanks dinoman!)
    * Fix the stream's passive flush timer (oh, it's supposed to work?)
2004-12-05 10:22:57 +00:00
9ce098ee06 added asciiwhite.i2p 2004-12-05 08:47:02 +00:00
927ae57d24 added fcp.i2p 2004-12-05 03:16:30 +00:00
mpc
d65c2d3539 added installation instructions 2004-12-05 01:57:12 +00:00
2d9d8f32dc 2004-12-03 jrandom
* Toss in a small pool of threads (3) to execute the events queued up with
      the SimpleTimer, as we do currently see the occasional event
      notification spiking up to a second or so.
    * Implement a SAM client API in java, useful for event based streaming (or
      for testing the SAM bridge)
    * Added support to shut down the SAM bridge on OOM (useful if the SAM
      bridge is being run outside of the router).
    * Include the SAM test code in the sam.jar
    * Remove an irrelevant warning message from SAM, which was caused by
      perfectly normal operation due to a session being closed.
    * Removed some unnecessary synchronization in the streaming lib's
      PacketQueue
    * More quickly clean up the memory used by the streaming lib by
      immediately killing each packet's resend job as soon as it is ACKed (or
      cancelled), so that there are no longer any valid pointers to the
      (potentially 32KB) packet.
    * Fixed the timestamps dumped to stdout when debugging the PacketHandler.
    * Drop packets that would expand our inbound window beyond our maximum
      buffer size (default 32 messages)
    * Always read the ACK/NACK data from the verified packets received, even
      if we are going to drop them
    * Always adjust the window when there are messages ACKed, though do not
      change its size except as before.
    * Streamlined some synchronization in the router's I2CP handling
    * Streamlined some memory allocation in the SAM bridge
    * Default the streaming lib to disconnect on inactivity, rather than send
      an empty message.
2004-12-04 23:43:07 +00:00
1a30cd5f4a 2004-12-03 jrandom
* Toss in a small pool of threads (3) to execute the events queued up with
      the SimpleTimer, as we do currently see the occasional event
      notification spiking up to a second or so.
    * Implement a SAM client API in java, useful for event based streaming (or
      for testing the SAM bridge)
    * Added support to shut down the SAM bridge on OOM (useful if the SAM
      bridge is being run outside of the router).
    * Include the SAM test code in the sam.jar
    * Remove an irrelevant warning message from SAM, which was caused by
      perfectly normal operation due to a session being closed.
    * Removed some unnecessary synchronization in the streaming lib's
      PacketQueue
    * More quickly clean up the memory used by the streaming lib by
      immediately killing each packet's resend job as soon as it is ACKed (or
      cancelled), so that there are no longer any valid pointers to the
      (potentially 32KB) packet.
    * Fixed the timestamps dumped to stdout when debugging the PacketHandler.
    * Drop packets that would expand our inbound window beyond our maximum
      buffer size (default 32 messages)
    * Always read the ACK/NACK data from the verified packets received, even
      if we are going to drop them
    * Always adjust the window when there are messages ACKed, though do not
      change its size except as before.
    * Streamlined some synchronization in the router's I2CP handling
    * Streamlined some memory allocation in the SAM bridge
    * Default the streaming lib to disconnect on inactivity, rather than send
      an empty message.
this still doesn't get the BT to where it needs to be, or fix the timeout problem,
but i don't like having so many commits outstanding and these updates are sound
2004-12-04 23:40:50 +00:00
f54687f398 added greenflog.i2p 2004-12-03 20:54:01 +00:00
mpc
9f4b4c5de1 Still trying to get this to compile under VS.NET 2004-12-03 05:59:25 +00:00
mpc
33bfa94229 Added session to naming callback 2004-12-02 23:05:30 +00:00
mpc
61e5f190a6 Updated examples 2004-12-02 22:54:22 +00:00
mpc
a4946272d0 get rid of stdint.h stuff because it confuses the microsoft compiler 2004-12-02 22:16:27 +00:00
8abd99d134 2004-12-01 jrandom
* Fix for a race in the streaming lib as caused by some odd SAM activity
2004-12-02 03:20:03 +00:00
97e8ab7c5b * 2004-12-01 0.4.2.2 released
2004-12-01  jrandom
    * Fixed a stupid typo that inadvertently allowed persistent HTTP
      connections to work (thanks duck!)
    * Make sure we override the inactivity timeout too
2004-12-02 00:35:17 +00:00
cb930a7ab5 * 2004-12-01 0.4.2.2 released
2004-12-01  jrandom
    * Fixed a stupid typo that inadvertently allowed persistent HTTP
      connections to work (thanks duck!)
    * Make sure we override the inactivity timeout too
2004-12-02 00:27:27 +00:00
610f1f7dd4 * 2004-12-01 0.4.2.1 released
2004-12-01  jrandom
    * Strip out any of the Accept-* HTTP header lines, and always make sure to
      include the forged User-agent header.
    * Adjust the default read timeout on the eepproxy to 60s, unless
      overridden.
    * Minor tweak on stream shutdown.
2004-12-01 22:31:55 +00:00
516d0b4db8 2004-11-30 jrandom
* Render the burst rate fields on /config.jsp properly (thanks ugha!)
    * Build in a simple timeout to flush data queued into the I2PSocket but
      not yet flushed.
    * Don't explicitly flush after each SAM stream write, but leave it up to
      the [nonblocking] passive flush.
    * Don't whine about 10-99 connection events occurring in a second
    * Don't wait for completion of packets that will not be ACKed (duh)
    * Adjust the congestion window, even if the packet was resent (duh)
    * Make sure to wake up any blocking read()'s when the MessageInputStream
      is close()ed (duh)
    * Never wait more than the disconnect timeout for a write to complete
2004-11-30 23:41:51 +00:00
df61ae5c6f duh. thanks clayboy :) 2004-11-30 00:10:20 +00:00
9f6584b55e 2004-11-29 jrandom
* Minor fixes to avoid unnecessary errors on shutdown (thanks susi!)
2004-11-29 23:24:49 +00:00
e4b41f5bb0 2004-11-29 jrandom
* Reduced contention for local client delivery
    * Drop the new code that munges the wrapper.config.  Instead, updates that
      need to change it will include their own wrapper.config in the
      i2pupdate.zip, overwriting the existing file.  If the file
      "wrapper.config.updated" is included, it is deleted at first opportunity
      and the router shut down, displaying a notice that the router must be
      started again cleanly to allow the changes to the wrapper.config to take
      effect.
    * Properly stop accept()ing I2PSocket connections if we close down the
      session (duh).
    * Make sure we cancel any outstanding Packets in flight when a connection
      is terminated (thanks susi!)
    * Split up the I2PTunnel closing a little further.
2004-11-29 22:27:39 +00:00
8d0cea93e9 2004-11-29 jrandom
* Reduced contention for local client delivery
    * Drop the new code that munges the wrapper.config.  Instead, updates that
      need to change it will include their own wrapper.config in the
      i2pupdate.zip, overwriting the existing file.  If the file
      "wrapper.config.updated" is included, it is deleted at first opportunity
      and the router shut down, displaying a notice that the router must be
      started again cleanly to allow the changes to the wrapper.config to take
      effect.
    * Properly stop accept()ing I2PSocket connections if we close down the
      session (duh).
    * Make sure we cancel any outstanding Packets in flight when a connection
      is terminated (thanks susi!)
    * Split up the I2PTunnel closing a little further.
2004-11-29 21:57:14 +00:00
d294d07919 added bdl.i2p 2004-11-29 20:55:38 +00:00
153eea2bd5 you mean i'm supposed to *test* it? 2004-11-29 03:35:39 +00:00
571e3c5c13 2004-11-28 jrandom
* Accept IP address detection changes with a 2-out-of-3 minimum.
    * As long as the router is up, keep retrying to bind the I2CP listener.
    * Decrease the java service wrapper ping frequency to once every 10
      minutes, rather than once every 5 seconds.
2004-11-29 02:09:27 +00:00
a2d268f3d6 2004-11-28 jrandom
* Accept IP address detection changes with a 2-out-of-3 minimum.
    * As long as the router is up, keep retrying to bind the I2CP listener.
    * Decrease the java service wrapper ping frequency to once every 10
      minutes, rather than once every 5 seconds.
2004-11-29 01:58:38 +00:00
02d456d7a0 added bacardi.i2p and guttersnipe.i2p 2004-11-28 22:47:01 +00:00
mpc
b3626ad86f should've tested it first 2004-11-28 05:11:39 +00:00
mpc
9b6eab451f partial raw handling 2004-11-28 05:10:29 +00:00
72be9b5f04 2004-11-27 jrandom
* Some cleanup and bugfixes for the IP address detection code where we
      only consider connections that have actually sent and received messages
      recently as active, rather than the mere presence of a TCP socket as
      activity.
2004-11-27 21:02:06 +00:00
35e94a7f65 added evil.i2p 2004-11-27 06:56:29 +00:00
8e02586cc9 2004-11-27 jrandom
* Removed the I2PTunnel inactivity timeout thread, since the new streaming
      lib can do that (without an additional per-connection thread).
    * Close the I2PTunnel forwarder threads more aggressively
2004-11-27 05:17:06 +00:00
0b5a640896 2004-11-27 jrandom
* Fix for a fast loop caused by a race in the new streaming library (thanks
      DrWoo, frontier, pwk_, and thetower!)
    * Minor updates to the SimpleTimer and Connection to help track down a
      high CPU usage problem (dumping debug info to stdout/wrapper.log if too
      many events/tasks fire in a second)
    * Minor fixes for races on client disconnects (causing NPEs)
2004-11-27 03:54:17 +00:00
64b5089909 * 2004-11-26 0.4.2 released
2004-11-26  jrandom
    * Enable the new streaming lib as the default.  That means, for any
      substantial definition, it is NOT BACKWARDS COMPATIBLE.
2004-11-26 15:20:14 +00:00
e0e09bfa45 (please wait until the release announcement before updating)
* 2004-11-26  0.4.2 released
2004-11-26  jrandom
    * Enable the new streaming lib as the default.  That means, for any
      substantial definition, it is NOT BACKWARDS COMPATIBLE.
2004-11-26 15:15:16 +00:00
8c7f9f2c65 added bdsm.i2p 2004-11-26 02:12:16 +00:00
aff5cea949 2004-11-25 jrandom
* Revised the installer to include start menu and desktop shortcuts for
      windows platforms, including pretty icons (thanks DrWoo!)
    * Allow clients specified in clients.config to have an explicit startup
      delay.
    * Update the default install to launch a browser pointing at the console
      whenever I2P starts up, rather than only the first time it starts up
      (configurable on /configservice.jsp, or in clients.config)
    * Bugfix to the clock skew checking code to monitor the delta between
      offsets, not the offset itself (duh)
    * Router console html update
    * New (and uuuuugly) code to verify that the wrapper.config contains
      the necessary classpath entries on update.  If it has to update the
      wrapper.config, it will stop the JVM and service completely, since the
      java service wrapper doesn't reread the wrapper.config on JVM restart -
      requiring the user to manually restart the service after an update.
    * Increase the TCP connection timeout to 30s (which is obscenely long)
2004-11-25 22:17:37 +00:00
8bd99f699f 2004-11-25 jrandom
* Revised the installer to include start menu and desktop shortcuts for
      windows platforms, including pretty icons (thanks DrWoo!)
    * Allow clients specified in clients.config to have an explicit startup
      delay.
    * Update the default install to launch a browser pointing at the console
      whenever I2P starts up, rather than only the first time it starts up
      (configurable on /configservice.jsp, or in clients.config)
    * Bugfix to the clock skew checking code to monitor the delta between
      offsets, not the offset itself (duh)
    * Router console html update
    * New (and uuuuugly) code to verify that the wrapper.config contains
      the necessary classpath entries on update.  If it has to update the
      wrapper.config, it will stop the JVM and service completely, since the
      java service wrapper doesn't reread the wrapper.config on JVM restart -
      requiring the user to manually restart the service after an update.
    * Increase the TCP connection timeout to 30s (which is obscenely long)
------------------------------------------------
2004-11-25 21:57:19 +00:00
b0513fff8a javadoc 2004-11-25 21:49:29 +00:00
f10db9d91a added eschaton.i2p 2004-11-23 03:58:46 +00:00
9b5fb17068 added blog.curiosity.i2p 2004-11-23 02:46:07 +00:00
608d713dca 2004-11-22 jrandom
* Update to the SAM bridge to reduce some unnecessary memory allocation.
    * New stat to keep track of slow jobs (ones that take more than a second
      to execute).  This is published in the netDb as jobQueue.jobRunSlow
2004-11-23 01:12:34 +00:00
6d5fc8ca21 2004-11-21 jrandom
* Update the I2PTunnel web interface to include an option for the new
      streaming lib (which is ignored until the 0.4.2 release).
    * Revised the I2PTunnel web interface to keep the I2CP options of client
      and httpclient tunnels in sync, as they all share the same I2CP session.
2004-11-22 17:57:16 +00:00
8c3145b70f 2004-11-21 jrandom
* Only allow small clock skews after the first 10 minutes of operation
      (to prevent later network lag bouncing us way off course - yes, we
      really need an NTP impl to balance out the network burps...)
    * Revamp the I2PTunnel web interface startup process so that everything
      is shown immediately, so that different pieces hanging don't hang
      the rest, and other minor bugfixes.
    * Take note of SAM startup error (in case you're already running a SAM
      bridge...)
    * Increase the bandwidth limiter burst values available to 10-60s (or
      whatever is placed in /configadvanced.jsp, of course)
2004-11-21 23:23:42 +00:00
12a6f3e938 2004-11-21 jrandom
* Only allow small clock skews after the first 10 minutes of operation
      (to prevent later network lag bouncing us way off course - yes, we
      really need an NTP impl to balance out the network burps...)
    * Revamp the I2PTunnel web interface startup process so that everything
      is shown immediately, so that different pieces hanging don't hang
      the rest, and other minor bugfixes.
    * Take note of SAM startup error (in case you're already running a SAM
      bridge...)
    * Increase the bandwidth limiter burst values available to 10-60s (or
      whatever is placed in /configadvanced.jsp, of course)
2004-11-21 22:31:33 +00:00
2c59435762 2004-11-21 jrandom
* Allow end of line comments in the hosts.txt and other config files,
      using '#' to begin the comments (thanks susi!)
    * Add support to I2PTunnel's 'client' feature for picking between multiple
      target destinations (e.g. 'client 6668 irc.duck.i2p,irc.baffled.i2p')
    * Add a quick link on the left hand nav to reseed if there aren't enough
      known peers, as well as link to the config page if there are no active
      peers.  Revised config page accordingly.
2004-11-21 19:42:57 +00:00
7336bf5c55 oops, forgot to commit this one. 2004-11-21 06:13:30 +00:00
2b21b97277 * make sure we send the close message even if the I2PSocket is closed directly (rather than the outputStream)
* only fail the session tags as a last resort for the last resend, rather than on every resend after the 1st
2004-11-21 04:11:35 +00:00
603bc99a2f 2004-11-21 jrandom
* Destroy ElGamal/AES+SessionTag keys after 15 minutes of inactivity
      rather than every 15 minutes, and increase the warning period in which
      we refresh tags from 30s to 2 minutes.
    * Bugfix for a rare problem closing an I2PTunnel stream where we'd fail
      to close the I2PSocket (leaving it to timeout).
2004-11-21 04:08:13 +00:00
426ede1c99 * handle con b0rkage more gracefully
* close the session on socket manager destroy (the old lib does, and SAM wants it to)
2004-11-20 15:52:08 +00:00
21506c1d1b handle poorly formatted packets more gracefully (in case someone uses the wrong streaming lib *cough*) 2004-11-20 02:40:57 +00:00
0b48b18e7e 2004-11-19 jrandom
* Off-by-one fix to the tunnel pool management code, alongside some
      explicit initialization.  This can affect clients whose lengths are
      shorter than the router's default (thanks duck!)
2004-11-19 23:04:27 +00:00
mpc
c8f6d9c7a1 improved logging 2004-11-19 02:17:20 +00:00
ed8eced9dd * stats
if you want to get this data and you're running outside the router, just toss on a
"-Dstat.logFilters=* -Dstat.logFile=proxy.stats" to the java command line.  the resulting
file can be parsed w/ java -cp lib/i2p.jar net.i2p.stat.StatLogSplitter proxy.stats, then fed
into gnuplot or whatever
2004-11-19 00:00:05 +00:00
4a029b7853 * if we timeout connecting or otherwise need to cancel tags that we've sent, go one step
further and cancel all of the tags we're using for that peer so that we can react to their
  potential restart / tag loss quicker.
* use the minimum resend delay as the base to be exponentiated if our RTT is too low
  (so we resend less)
* dont be such a wuss when flushing a closed stream
2004-11-18 19:42:11 +00:00
6bd9e58ece * fix a reordering bug that can trim the end of a stream under heavy lag (thanks duck!)
* fix a pair of races on router crash
2004-11-18 13:47:27 +00:00
cd075fc8a6 2004-11-17 jrandom
* Fix to propagate i2psocket options into the SAM bridge correctly (thanks
      Ragnarok!)
2004-11-17 19:42:53 +00:00
e733427920 2004-11-17 jrandom
* Minor logging update.
2004-11-17 18:34:25 +00:00
107da0ae22 i suppose i should commit this, 'eh? (thanks mule) 2004-11-17 03:43:04 +00:00
mpc
3629d7a32c *** empty log message *** 2004-11-17 03:42:00 +00:00
71e1152cde if we've already sent our close packet but we still want to send something, send an ack packet 2004-11-17 00:57:33 +00:00
d01ab7fd23 *cough* (lets not have everyone think they're the resend with a packet in the air...) 2004-11-16 22:47:16 +00:00
f46d0a720c * fix up the propagation of client options to the streaming lib
* add new back-off logic to reduce payload resends during transient
  lag - only let one packet be resent at a time, even if the window size
  allows it (and the packet timers request it).  this should make
  congestion less painful, and reduce the overall number of messages
  resent (as the SACKs for the one packet actively resent should clarify
  what made it through)
2004-11-16 22:15:16 +00:00
d943b4993a 2004-11-16 jrandom
* Clean up the propagation of i2psocket options so that various streaming
      libs can honor them more precisely
2004-11-16 22:11:11 +00:00
4a4f57d6ac 2004-11-16 jrandom
* Minor logging update
(toss net.i2p.router.JobQueueRunner=WARN in /configlogging.jsp to see wtf is hanging your router)
2004-11-16 13:45:40 +00:00
085da16268 run multiple connections 2004-11-15 14:40:08 +00:00
306f6b0037 various tweaks to make sure we release appropriate references ASAP 2004-11-15 14:37:56 +00:00
3780d290fa 2004-11-14 jrandom
* Fix a long standing leak in I2PTunnel (hanging on to i2psocket objects)
    * Fix a leak injected into the SimpleTimer
    * Fix a race condition in the tunnel message handling
2004-11-15 14:35:16 +00:00
ad7dc66f90 2004-11-13 jrandom
* Added throttles on how many I2PTunnel client connections we open at once
    * Replaced some buffered streams in I2PTunnel with unbuffered streams, as
      the streaming library used should take care of any buffering.
    * Added a cache for some objects used in I2PTunnel, especially useful when
      there are many short lived connections.
    * Trimmed the SimpleTimer's processing a bit
2004-11-13 09:59:37 +00:00
258244fed8 * ack packets with a payload, even if they have ID=0 (duh)
* properly implement the connection timeout
* make sure we clear the outbound packets on close
* don't b0rk on repeated close() calls
2004-11-13 09:49:31 +00:00
5f7982540f 2004-11-13 jrandom
* Added throttles on how many I2PTunnel client connections we open at once
    * Replaced some buffered streams in I2PTunnel with unbuffered streams, as
      the streaming library used should take care of any buffering.
    * Added a cache for some objects used in I2PTunnel, especially useful when
      there are many short lived connections.
    * Trimmed the SimpleTimer's processing a bit
2004-11-13 09:43:35 +00:00
b1c0de4b77 (oops forgot to commit before passing out) 2004-11-12 06:07:33 +00:00
45b3fecfff allow throttles on the number of streams participated in, as well as a timeout + queue for overflow 2004-11-11 21:19:59 +00:00
9774ded4dd added slacker.i2p 2004-11-11 10:00:32 +00:00
7ec027854e update connection options specification
tweak around with the connection delay and timeout activity to test delayed vs. immediate syn
2004-11-11 09:41:25 +00:00
b457001b42 timeout gracefully even if the socket is stopped in odd places 2004-11-11 09:37:46 +00:00
6fc6866eb4 *cough* 2004-11-10 13:28:39 +00:00
299e5528bc * deal with nondeferred connections (block the mgr.connect(..) until either success or failure)
* allow loading the connection options from the env (or another Properties specified)
2004-11-10 12:55:06 +00:00
881524a5e4 2004-11-10 jrandom
* Allow loading the (mini)streaming connection options from the
      environment.
    * More defensive programming in the DSA implementation.
2004-11-10 12:33:01 +00:00
ffc405138d added pdforge.i2p and ses.i2p 2004-11-10 09:43:45 +00:00
f6ff74af16 oops, properly choke the congestion detection for resend 2004-11-09 13:33:11 +00:00
73a12d47de * you mean we should implement congestion *avoidance* too?
* ack properly on duplicates
* set the message input stream buffer large enough to fit the max window (duh)
2004-11-09 13:26:10 +00:00
83165df7e5 * delay the ack of a syn
* make sure we ack duplicate messages received (if we aren't already doing so)
* implement a choke on the local buffer, in case we receive data faster than it's
  removed from the i2psocket's MessageInputStream (handle via packet drop and
  explicit congestion notification)
2004-11-09 11:00:04 +00:00
30074be5a5 logging 2004-11-09 05:54:39 +00:00
16715aa309 * synchronize around the buffer used in the packet queue (duh)
* dont increment # unacked packets artificially
* dont try to push data after closing
* cleanup the packet serialization
* logging
2004-11-08 21:40:25 +00:00
53f3802a81 dont keepalive if we're in the closing process (duh) 2004-11-08 16:49:23 +00:00
07626b5cc2 two new tests - inactivity (seeing how we stay alive) and timeout (contacting someone who doesnt exist) 2004-11-08 15:27:41 +00:00
9ea603caf2 * hang around for 5m (er, 2.5msl, i suppose) after connection closure no matter what, so we
can respond appropriately
* optional inactivity timer with three possible results:
  disconnect, send a (blank) message (to be ACKed), or do nothing.
2004-11-08 15:05:13 +00:00
18ab9b80d2 testStaggered - read what we can while writing randomly 2004-11-08 05:48:36 +00:00
71c1cb4e12 * min resend delay = 20s
* rework the messageInputStream to implement read(byte[], off, len), and fix some fencepost
  bugs in the byte retrieval
2004-11-08 05:42:57 +00:00
0c049f39d9 2004-11-08 jrandom
* Remove spurious flush calls from I2PTunnel, and work with the
      I2PSocket's output stream directly (as the various implementations
      do their own buffering).
    * Another pass at a long standing JobQueue bug - dramatically simplify
      the job management synchronization since we don't need to deal with
      high contention (unlike last year when we had dozens of queue runners
      going at once).
    * Logging
2004-11-08 05:40:20 +00:00
096b807c37 2004-11-08 jrandom
* Make the SAM bridge more resilient to bad handshakes (thanks duck!)
2004-11-08 03:18:01 +00:00
mpc
9018af4765 Leave it up to the client application whether to poll or not 2004-11-07 05:06:36 +00:00
mpc
323f28e306 Now it works?? 2004-11-07 04:14:53 +00:00
mpc
e9dbd00f42 *** empty log message *** 2004-11-07 04:12:36 +00:00
1385 changed files with 154631 additions and 35414 deletions

Makefile.gcj (new file, 101 lines)

@ -0,0 +1,101 @@
# Makefile for building native I2P binaries and libraries with GCJ
#
# WARNING: Do not use this yet, as it may explode (etc).
#
GCJ=gcj #/usr/local/gcc-4.0.2/bin/gcj
EXTRA_LD_PATH= #/usr/local/gcc-4.0.2/lib
ANT=ant #/opt/apache-ant-1.6.5/bin/ant
ANT_TARGET=buildclean
NATIVE_DIR=native
##
# Define what jar files get into libi2p.so. The current setup is
# *incredibly* lazy, throwing everything in the .so, rather than
# giving each .jar file its own .so.
# i2p.jar: base SDK
# mstreaming.jar: streaming API
# streaming.jar: full streaming lib implementation
# i2ptunnel.jar: I2PTunnel proxy
# sam.jar: SAM bridge and API
# i2psnark.jar: bittorrent client
# router.jar: full I2P router
# jbigi.jar: collection of native optimized GMP routines for crypto
JAR_BASE=i2p.jar mstreaming.jar streaming.jar
JAR_CLIENTS=i2ptunnel.jar sam.jar
JAR_ROUTER=router.jar
JAR_JBIGI=jbigi.jar
JAR_XML=xml-apis.jar resolver.jar xercesImpl.jar
JAR_CONSOLE=\
i2psnark.jar \
javax.servlet.jar \
commons-el.jar \
commons-logging.jar \
jasper-runtime.jar \
ant-apache-bcel.jar \
ant.jar \
jasper-compiler.jar \
org.mortbay.jetty.jar \
routerconsole.jar
JAR_SUCKER=jdom.jar rome-0.7.jar sucker.jar
LIBI2P_JARS=${JAR_BASE} ${JAR_CLIENTS} ${JAR_ROUTER} ${JAR_JBIGI}
# unfortunately, it's not quite ready for most end users, as the
# ${JAR_CONSOLE} fails to compile with:
# org/apache/commons/logging/impl/LogKitLogger.java: In class 'org.apache.commons.logging.impl.LogKitLogger':
# .../LogKitLogger.java: In constructor '(java.lang.String)':
# .../LogKitLogger.java:91: error: cannot find file for class org.apache.log.Hierarchy
# .../LogKitLogger.java:91: error: cannot find file for class org.apache.log.Hierarchy
# .../LogKitLogger.java:104: error: cannot find file for class org.apache.log.Hierarchy
# .../LogKitLogger.java:104: confused by earlier errors, bailing out
#${JAR_CONSOLE}\
#${JAR_XML} \
#${JAR_SUCKER}
#${JAR_CONSOLE}
SYSTEM_PROPS=-DloggerFilenameOverride=logs/log-router-@.txt \
-Dorg.mortbay.http.Version.paranoid=true \
-Dorg.mortbay.util.FileResource.checkAliases=false \
-Dorg.mortbay.xml.XmlParser.NotValidating=true
#SYSTEM_PROPS=-Di2p.weakPRNG=true
OPTIMIZE=-O2
#OPTIMIZE=-O3
LD_LIBRARY_PATH=${EXTRA_LD_PATH}:.
all: jars native
@echo "* Build complete"
jars:
@${ANT} ${ANT_TARGET}
clean: native_clean
native: native_clean native_shared
@echo "* Native code build in ${NATIVE}"
native_clean:
@rm -rf ${NATIVE_DIR}
@mkdir ${NATIVE_DIR}
native_shared: libi2p.so
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2p_dsa --main=net.i2p.crypto.DSAEngine
@echo "* i2p_dsa is a simple test app with the DSA engine and Fortuna PRNG to make sure crypto is working"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/prng --main=gnu.crypto.prng.FortunaStandalone
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2ptunnel --main=net.i2p.i2ptunnel.I2PTunnel
@echo "* i2ptunnel is mihi's I2PTunnel CLI"
@echo " run it as ./i2ptunnel -cli to avoid awt complaints"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2ptunnelctl --main=net.i2p.i2ptunnel.TunnelControllerGroup
@echo "* i2ptunnelctl is a controller for I2PTunnel, reading i2ptunnel.config"
@echo " and launching the appropriate proxies"
#@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2psnark --main=org.klomp.snark.Snark
#@echo "* i2psnark is an anonymous bittorrent client"
@cd build ; ${GCJ} ${OPTIMIZE} -fjni -L../${NATIVE_DIR} -li2p ${SYSTEM_PROPS} -o ../${NATIVE_DIR}/i2prouter --main=net.i2p.router.Router
@echo "* i2prouter is the main I2P router"
@echo " it can be used, and while the router console won't load,"
@echo " i2ptunnel will, so it will start all the proxies defined in i2ptunnel.config"
libi2p.so:
@echo "* Building libi2p.so"
@(cd build ; time ${GCJ} ${OPTIMIZE} -fPIC -fjni -shared -o ../${NATIVE_DIR}/libi2p.so ${LIBI2P_JARS} ; cd .. )
@ls -l ${NATIVE_DIR}/libi2p.so
@echo "* libi2p.so built"


@ -0,0 +1,43 @@
addressbook v2.0.2 - A simple name resolution mechanism for I2P
addressbook is a simple implementation of subscribable address books for I2P.
Addresses are stored in userhosts.txt and a second copy of the address book is
placed on your eepsite as hosts.txt.
subscriptions.txt contains a list of urls to check for new addresses.
Since the urls are checked in order, and conflicting addresses are not added,
addressbook.subscriptions can be considered to be ranked in order of trust.
The system created by addressbook is similar to the early days of DNS,
when everyone ran a local name server. The major difference is the lack of
authority. Names cannot be guaranteed to be globally unique, but in practice
they probably will be, for a variety of social reasons.
Requirements
************
i2p with a running http proxy
Installation and Usage
**********************
1. Unzip addressbook-%ver.zip into your i2p directory.
2. Restart your router.
The addressbook daemon will automatically run while the router is up.
Aside from the daemon itself, the other elements of the addressbook interface
are the config.txt, myhosts.txt, and subscriptions.txt files found in the addressbook
directory.
config.txt is the configuration file for addressbook.
myhosts.txt is the addressbook master address book. Addresses placed in this file
take precedence over those in the router address book and in remote address books.
If changes are made to this file, they will be reflected in the router address book
and published address book after the next update. Do not make changes directly to the
router address book, as they could be lost during an update.
subscriptions.txt is the subscription list for addressbook. Each entry is an absolute
url to a file in hosts.txt format. Since the list is checked in order, urls should be
listed in order of trust.
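For illustration, each line of a hosts.txt-format file is a key=value pair mapping
an I2P hostname to its base64 destination, and comment lines may begin with '#' or ';'.
A placeholder entry looks like:

example.i2p=somereallylongbase64thingAAAA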


@ -0,0 +1,51 @@
<?xml version="1.0"?>
<project name="addressbook" default="war" basedir=".">
<property name="src" value="java/src/addressbook"/>
<property name="build" value="build"/>
<property name="dist" location="dist"/>
<property name="jar" value="addressbook.jar"/>
<property name="war" value="addressbook.war"/>
<target name="init">
<mkdir dir="${build}"/>
<mkdir dir="${dist}"/>
</target>
<target name="clean">
<delete dir="${build}"/>
<delete dir="${dist}"/>
</target>
<target name="distclean" depends="clean" />
<target name="compile" depends="init">
<javac debug="true" deprecation="on" source="1.3" target="1.3"
srcdir="${src}" destdir="${build}">
<classpath>
<pathelement location="../../core/java/build/i2p.jar" />
<pathelement location="../jetty/jettylib/javax.servlet.jar" />
</classpath>
</javac>
</target>
<target name="jar" depends="compile">
<jar basedir="${build}" destfile="${dist}/${jar}">
<manifest>
<attribute name="Main-Class" value="addressbook.Daemon"/>
</manifest>
</jar>
</target>
<target name="war" depends="compile">
<mkdir dir="${dist}/tmp"/>
<mkdir dir="${dist}/tmp/WEB-INF"/>
<mkdir dir="${dist}/tmp/WEB-INF/classes"/>
<copy todir="${dist}/tmp/WEB-INF/classes">
<fileset dir="${build}"/>
</copy>
<war basedir="${dist}/tmp" webxml="web.xml" destfile="${dist}/${war}"/>
<delete dir="${dist}/tmp"/>
</target>
</project>


@ -0,0 +1,48 @@
# This is the configuration file for addressbook.
#
# Options
# *******
# All paths are relative to i2p/addressbook. Default value for
# each option is given in parentheses.
#
# proxy_host The hostname of your I2P http proxy.
# (localhost)
#
# proxy_port The port of your I2P http proxy. (4444)
#
# master_addressbook The path to your master address book, used for local
# changes only. (myhosts.txt)
#
# router_addressbook The path to the address book used by the router.
# Contains the addresses from your master address book
# and your subscribed address books. (../userhosts.txt)
#
# private_addressbook The path to the private address book used by the router.
# This is used only by the router and SusiDNS.
# It is not published by addressbook. (../privatehosts.txt)
#
# published_addressbook The path to the copy of your address book made
# available on i2p. (../eepsite/docroot/hosts.txt)
#
# log The path to your addressbook log. (log.txt)
#
# subscriptions The path to your subscription file. (subscriptions.txt)
#
# etags The path to the etags header storage file. (etags)
#
# last_modified The path to the last-modified header storage file.
# (last_modified)
#
# update_delay The time (in hours) between each update. (1)
proxy_host=localhost
proxy_port=4444
master_addressbook=myhosts.txt
router_addressbook=../userhosts.txt
private_addressbook=../privatehosts.txt
published_addressbook=../eepsite/docroot/hosts.txt
log=log.txt
subscriptions=subscriptions.txt
etags=etags
last_modified=last_modified
update_delay=1


@ -0,0 +1,254 @@
/*
* Copyright (c) 2004 Ragnarok
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package addressbook;
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import net.i2p.I2PAppContext;
import net.i2p.util.EepGet;
/**
* An address book for storing human readable names mapped to base64 i2p
* destinations. AddressBooks can be created from local and remote files, merged
* together, and written out to local files.
*
* @author Ragnarok
*
*/
public class AddressBook {
private String location;
private Map addresses;
private boolean modified;
/**
* Construct an AddressBook from the contents of the Map addresses.
*
* @param addresses
* A Map containing human readable addresses as keys, mapped to
* base64 i2p destinations.
*/
public AddressBook(Map addresses) {
this.addresses = addresses;
}
/**
* Construct an AddressBook from the contents of the file at url. If the
* remote file cannot be read, construct an empty AddressBook
*
* @param url
* A URL pointing at a file with lines in the format "key=value",
* where key is a human readable name, and value is a base64 i2p
* destination.
*/
/* unused
public AddressBook(String url, String proxyHost, int proxyPort) {
this.location = url;
EepGet get = new EepGet(I2PAppContext.getGlobalContext(), true,
proxyHost, proxyPort, 0, "addressbook.tmp", url, true,
null);
get.fetch();
try {
this.addresses = ConfigParser.parse(new File("addressbook.tmp"));
} catch (IOException exp) {
this.addresses = new HashMap();
}
new File("addressbook.tmp").delete();
}
*/
/**
* Construct an AddressBook from the Subscription subscription. If the
* address book at subscription has not changed since the last time it was
* read or cannot be read, return an empty AddressBook.
* Set a maximum size of the remote book to make it a little harder for a malicious book-sender.
*
* @param subscription
* A Subscription instance pointing at a remote address book.
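* @param proxyHost
* The hostname of the local I2P http proxy used to fetch the remote book.
* @param proxyPort
* The port of the local I2P http proxy.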
*/
static final long MAX_SUB_SIZE = 3 * 1024 * 1024l; //about 5,000 hosts
public AddressBook(Subscription subscription, String proxyHost, int proxyPort) {
this.location = subscription.getLocation();
EepGet get = new EepGet(I2PAppContext.getGlobalContext(), true,
proxyHost, proxyPort, 0, -1l, MAX_SUB_SIZE, "addressbook.tmp", null,
subscription.getLocation(), true, subscription.getEtag(), subscription.getLastModified(), null);
if (get.fetch()) {
subscription.setEtag(get.getETag());
subscription.setLastModified(get.getLastModified());
}
try {
this.addresses = ConfigParser.parse(new File("addressbook.tmp"));
} catch (IOException exp) {
this.addresses = new HashMap();
}
new File("addressbook.tmp").delete();
}
/**
* Construct an AddressBook from the contents of the file at file. If the
* file cannot be read, construct an empty AddressBook
*
* @param file
* A File pointing at a file with lines in the format
* "key=value", where key is a human readable name, and value is
* a base64 i2p destination.
*/
public AddressBook(File file) {
this.location = file.toString();
try {
this.addresses = ConfigParser.parse(file);
} catch (IOException exp) {
this.addresses = new HashMap();
}
}
/**
* Return a Map containing the addresses in the AddressBook.
*
* @return A Map containing the addresses in the AddressBook, where the key
* is a human readable name, and the value is a base64 i2p
* destination.
*/
public Map getAddresses() {
return this.addresses;
}
/**
* Return the location of the file this AddressBook was constructed from.
*
* @return A String representing either an abstract path, or a url,
* depending on how the instance was constructed.
*/
public String getLocation() {
return this.location;
}
/**
* Return a string representation of the contents of the AddressBook.
*
* @return A String representing the contents of the AddressBook.
*/
public String toString() {
return this.addresses.toString();
}
/**
* Do basic validation of the hostname and dest
* hostname was already converted to lower case by ConfigParser.parse()
*/
private static boolean valid(String host, String dest) {
return
host.endsWith(".i2p") &&
host.length() > 4 &&
host.length() <= 67 && // 63 + ".i2p"
(! host.startsWith(".")) &&
(! host.startsWith("-")) &&
host.indexOf(".-") < 0 &&
host.indexOf("-.") < 0 &&
host.indexOf("..") < 0 &&
// IDN - basic check, not complete validation
(host.indexOf("--") < 0 || host.startsWith("xn--") || host.indexOf(".xn--") > 0) &&
host.replaceAll("[a-z0-9.-]", "").length() == 0 &&
// some reserved names that may be used for local configuration someday
(! host.equals("proxy.i2p")) &&
(! host.equals("router.i2p")) &&
(! host.equals("console.i2p")) &&
(! host.endsWith(".proxy.i2p")) &&
(! host.endsWith(".router.i2p")) &&
(! host.endsWith(".console.i2p")) &&
dest.length() == 516 &&
dest.endsWith("AAAA") &&
dest.replaceAll("[a-zA-Z0-9~-]", "").length() == 0
;
}
/**
* Merge this AddressBook with AddressBook other, writing messages about new
* addresses or conflicts to log. Addresses in AddressBook other that are
* not in this AddressBook are added to this AddressBook. In case of a
* conflict, addresses in this AddressBook take precedence
*
* @param other
* An AddressBook to merge with.
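* @param overwrite
* If true, valid entries from other replace any matching entries already in
* this AddressBook; if false, differing entries are only noted in the log.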
* @param log
* The log to write messages about new addresses or conflicts to.
*/
public void merge(AddressBook other, boolean overwrite, Log log) {
Iterator otherIter = other.addresses.keySet().iterator();
while (otherIter.hasNext()) {
String otherKey = (String) otherIter.next();
String otherValue = (String) other.addresses.get(otherKey);
if (valid(otherKey, otherValue)) {
if (this.addresses.containsKey(otherKey) && !overwrite) {
if (!this.addresses.get(otherKey).equals(otherValue)
&& log != null) {
log.append("Conflict for " + otherKey + " from "
+ other.location
+ ". Destination in remote address book is "
+ otherValue);
}
} else if (!this.addresses.containsKey(otherKey)
|| !this.addresses.get(otherKey).equals(otherValue)) {
this.addresses.put(otherKey, otherValue);
this.modified = true;
if (log != null) {
log.append("New address " + otherKey
+ " added to address book. From: " + other.location);
}
}
}
}
}
/**
* Write the contents of this AddressBook out to the File file. If the file
* cannot be written to, this method will silently fail.
*
* @param file
* The file to write the contents of this AddressBook to.
*/
public void write(File file) {
if (this.modified) {
try {
ConfigParser.write(this.addresses, file);
} catch (IOException exp) {
}
}
}
/**
* Write this AddressBook out to the file it was read from. Requires that
* AddressBook was constructed from a file on the local filesystem. If the
* file cannot be written to, this method will silently fail.
*/
public void write() {
this.write(new File(this.location));
}
}
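A minimal, hypothetical sketch (not part of this changeset) of how the AddressBook
API above fits together; it mirrors what Daemon.update() later in this diff does.
The file names are placeholders and the addressbook classes are assumed to be on
the classpath.

import java.io.File;

import addressbook.AddressBook;
import addressbook.Log;

public class AddressBookMergeExample {
    public static void main(String[] args) {
        AddressBook master = new AddressBook(new File("myhosts.txt"));
        AddressBook router = new AddressBook(new File("userhosts.txt"));
        Log log = new Log(new File("log.txt"));

        // Local entries win: with overwrite=true, valid entries from master
        // replace any matching entries in router (no conflict logging here,
        // since the log argument is null).
        router.merge(master, true, null);

        // Remote entries never overwrite: with overwrite=false, only new valid
        // entries are added, and conflicting ones are written to the log.
        AddressBook remote = new AddressBook(new File("remote-hosts.txt"));
        router.merge(remote, false, log);

        // write() saves back to the file the book was loaded from,
        // and only if merge() actually changed something.
        router.write();
    }
}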


@ -0,0 +1,312 @@
/*
* Copyright (c) 2004 Ragnarok
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package addressbook;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
/**
* Utility class providing methods to parse and write files in config file
* format, and subscription file format.
*
* @author Ragnarok
*/
public class ConfigParser {
/**
* Strip the comments from a String. Lines that begin with '#' and ';' are
* considered comments, as well as any part of a line after a '#'.
*
* @param inputLine
* A String to strip comments from.
* @return A String without comments, but otherwise identical to inputLine.
*/
public static String stripComments(String inputLine) {
if (inputLine.startsWith(";")) {
return "";
}
if (inputLine.split("#").length > 0) {
return inputLine.split("#")[0];
} else {
return "";
}
}
/**
* Return a Map using the contents of BufferedReader input. input must have
* a single key, value pair on each line, in the format: key=value. Lines
* starting with '#' or ';' are considered comments, and ignored. Lines that
* are obviously not in the format key=value are also ignored.
* The key is converted to lower case.
*
* @param input
* A BufferedReader with lines in key=value format to parse into
* a Map.
* @return A Map containing the key, value pairs from input.
* @throws IOException
* if the BufferedReader cannot be read.
*
*/
public static Map parse(BufferedReader input) throws IOException {
Map result = new HashMap();
String inputLine;
inputLine = input.readLine();
while (inputLine != null) {
inputLine = ConfigParser.stripComments(inputLine);
String[] splitLine = inputLine.split("=");
if (splitLine.length == 2) {
result.put(splitLine[0].trim().toLowerCase(), splitLine[1].trim());
}
inputLine = input.readLine();
}
input.close();
return result;
}
/**
* Return a Map using the contents of the File file. See parse(BufferedReader)
* for details of the input format.
*
* @param file
* A File to parse.
* @return A Map containing the key, value pairs from file.
* @throws IOException
* if file cannot be read.
*/
public static Map parse(File file) throws IOException {
FileInputStream fileStream = new FileInputStream(file);
BufferedReader input = new BufferedReader(new InputStreamReader(
fileStream));
return ConfigParser.parse(input);
}
/**
* Return a Map using the contents of the String string. See
* parse(BufferedReader) for details of the input format.
*
* @param string
* A String to parse.
* @return A Map containing the key, value pairs from string.
* @throws IOException
* if string cannot be read.
*/
public static Map parse(String string) throws IOException {
StringReader stringReader = new StringReader(string);
BufferedReader input = new BufferedReader(stringReader);
return ConfigParser.parse(input);
}
/**
* Return a Map using the contents of the File file. If file cannot be read,
* use map instead, and write the result to where file should have been.
*
* @param file
* A File to attempt to parse.
* @param map
* A Map to use as the default, if file fails.
* @return A Map containing the key, value pairs from file, or if file
* cannot be read, map.
*/
public static Map parse(File file, Map map) {
Map result = new HashMap();
try {
result = ConfigParser.parse(file);
} catch (IOException exp) {
result = map;
try {
ConfigParser.write(result, file);
} catch (IOException exp2) {
}
}
return result;
}
/**
* Return a List where each element is a line from the BufferedReader input.
*
* @param input
* A BufferedReader to parse.
* @return A List consisting of one element for each line in input.
* @throws IOException
* if input cannot be read.
*/
public static List parseSubscriptions(BufferedReader input)
throws IOException {
List result = new LinkedList();
String inputLine = input.readLine();
while (inputLine != null) {
inputLine = ConfigParser.stripComments(inputLine).trim();
if (inputLine.length() > 0) {
result.add(inputLine);
}
inputLine = input.readLine();
}
input.close();
return result;
}
/**
* Return a List where each element is a line from the File file.
*
* @param file
* A File to parse.
* @return A List consisting of one element for each line in file.
* @throws IOException
* if file cannot be read.
*/
public static List parseSubscriptions(File file) throws IOException {
FileInputStream fileStream = new FileInputStream(file);
BufferedReader input = new BufferedReader(new InputStreamReader(
fileStream));
return ConfigParser.parseSubscriptions(input);
}
/**
* Return a List where each element is a line from the String string.
*
* @param string
* A String to parse.
* @return A List consisting of one element for each line in string.
* @throws IOException
* if string cannot be read.
*/
public static List parseSubscriptions(String string) throws IOException {
StringReader stringReader = new StringReader(string);
BufferedReader input = new BufferedReader(stringReader);
return ConfigParser.parseSubscriptions(input);
}
/**
* Return a List using the contents of the File file. If file cannot be
* read, use list instead, and write the result to where file should have
* been.
*
* @param file
* A File to attempt to parse.
* @param list
* A List to use as the default, if file fails.
* @return A List consisting of one element for each line in file, or if
* file cannot be read, list.
*/
public static List parseSubscriptions(File file, List list) {
List result = new LinkedList();
try {
result = ConfigParser.parseSubscriptions(file);
} catch (IOException exp) {
result = list;
try {
ConfigParser.writeSubscriptions(result, file);
} catch (IOException exp2) {
}
}
return result;
}
/**
* Write contents of Map map to BufferedWriter output. Output is written
* with one key, value pair on each line, in the format: key=value.
*
* @param map
* A Map to write to output.
* @param output
* A BufferedWriter to write the Map to.
* @throws IOException
* if the BufferedWriter cannot be written to.
*/
public static void write(Map map, BufferedWriter output) throws IOException {
Iterator keyIter = map.keySet().iterator();
while (keyIter.hasNext()) {
String key = (String) keyIter.next();
output.write(key + "=" + (String) map.get(key));
output.newLine();
}
output.close();
}
/**
* Write contents of Map map to the File file. Output is written
* with one key, value pair on each line, in the format: key=value.
*
* @param map
* A Map to write to file.
* @param file
* A File to write the Map to.
* @throws IOException
* if file cannot be written to.
*/
public static void write(Map map, File file) throws IOException {
ConfigParser
.write(map, new BufferedWriter(new FileWriter(file, false)));
}
/**
* Write contents of List list to BufferedWriter output. Output is written
* with each element of list on a new line.
*
* @param list
* A List to write to file.
* @param output
* A BufferedWriter to write list to.
* @throws IOException
* if output cannot be written to.
*/
public static void writeSubscriptions(List list, BufferedWriter output)
throws IOException {
Iterator iter = list.iterator();
while (iter.hasNext()) {
output.write((String) iter.next());
output.newLine();
}
output.close();
}
/**
* Write contents of List list to File file. Output is written with each
* element of list on a new line.
*
* @param list
* A List to write to file.
* @param file
* A File to write list to.
* @throws IOException
* if output cannot be written to.
*/
public static void writeSubscriptions(List list, File file)
throws IOException {
ConfigParser.writeSubscriptions(list, new BufferedWriter(
new FileWriter(file, false)));
}
}
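As a quick illustration of the class above, here is a minimal, hypothetical usage
sketch (not part of this changeset): it loads a key=value config file with
parse(File, Map), falling back to, and writing out, the supplied defaults if the
file cannot be read. The file path and the surrounding class are invented; the
addressbook classes are assumed to be on the classpath.

import java.io.File;
import java.util.HashMap;
import java.util.Map;

import addressbook.ConfigParser;

public class ConfigParserExample {
    public static void main(String[] args) {
        // Defaults returned (and written to the file) when the file is unreadable.
        Map defaults = new HashMap();
        defaults.put("proxy_host", "localhost");
        defaults.put("proxy_port", "4444");
        // If addressbook/config.txt does not exist yet, it is created from the defaults.
        Map settings = ConfigParser.parse(new File("addressbook/config.txt"), defaults);
        System.out.println("proxy_host=" + settings.get("proxy_host"));
    }
}

This is the same pattern Daemon.run() below uses to load config.txt with its
built-in defaults.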


@ -0,0 +1,192 @@
/*
* Copyright (c) 2004 Ragnarok
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package addressbook;
import java.io.File;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
/**
* Main class of addressbook. Performs updates, and runs the main loop.
*
* @author Ragnarok
*
*/
public class Daemon {
public static final String VERSION = "2.0.3";
private static final Daemon _instance = new Daemon();
/**
* Update the router and published address books using remote data from the
* subscribed address books listed in subscriptions.
*
* @param master
* The master AddressBook. This address book is never
* overwritten, so it is safe for the user to write to.
* @param router
* The router AddressBook. This is the address book read by
* client applications.
* @param published
* The published AddressBook. This address book is published on
* the user's eepsite so that others may subscribe to it.
* @param subscriptions
* A SubscriptionList listing the remote address books to update
* from.
* @param log
* The log to write changes and conflicts to.
*/
public void update(AddressBook master, AddressBook router,
File published, SubscriptionList subscriptions, Log log) {
router.merge(master, true, null);
Iterator iter = subscriptions.iterator();
while (iter.hasNext()) {
router.merge((AddressBook) iter.next(), false, log);
}
router.write();
if (published != null)
router.write(published);
subscriptions.write();
}
/**
* Run an update, using the Map settings to provide the parameters.
*
* @param settings
* A Map containing the parameters needed by update.
* @param home
* The directory containing addressbook's configuration files.
*/
public void update(Map settings, String home) {
File masterFile = new File(home, (String) settings
.get("master_addressbook"));
File routerFile = new File(home, (String) settings
.get("router_addressbook"));
File published = null;
if ("true".equals(settings.get("should_publish")))
published = new File(home, (String) settings
.get("published_addressbook"));
File subscriptionFile = new File(home, (String) settings
.get("subscriptions"));
File logFile = new File(home, (String) settings.get("log"));
File etagsFile = new File(home, (String) settings.get("etags"));
File lastModifiedFile = new File(home, (String) settings
.get("last_modified"));
AddressBook master = new AddressBook(masterFile);
AddressBook router = new AddressBook(routerFile);
List defaultSubs = new LinkedList();
// defaultSubs.add("http://i2p/NF2RLVUxVulR3IqK0sGJR0dHQcGXAzwa6rEO4WAWYXOHw-DoZhKnlbf1nzHXwMEJoex5nFTyiNMqxJMWlY54cvU~UenZdkyQQeUSBZXyuSweflUXFqKN-y8xIoK2w9Ylq1k8IcrAFDsITyOzjUKoOPfVq34rKNDo7fYyis4kT5bAHy~2N1EVMs34pi2RFabATIOBk38Qhab57Umpa6yEoE~rbyR~suDRvD7gjBvBiIKFqhFueXsR2uSrPB-yzwAGofTXuklofK3DdKspciclTVzqbDjsk5UXfu2nTrC1agkhLyqlOfjhyqC~t1IXm-Vs2o7911k7KKLGjB4lmH508YJ7G9fLAUyjuB-wwwhejoWqvg7oWvqo4oIok8LG6ECR71C3dzCvIjY2QcrhoaazA9G4zcGMm6NKND-H4XY6tUWhpB~5GefB3YczOqMbHq4wi0O9MzBFrOJEOs3X4hwboKWANf7DT5PZKJZ5KorQPsYRSq0E3wSOsFCSsdVCKUGsAAAA/i2p/hosts.txt");
defaultSubs.add("http://www.i2p2.i2p/hosts.txt");
SubscriptionList subscriptions = new SubscriptionList(subscriptionFile,
etagsFile, lastModifiedFile, defaultSubs, (String) settings
.get("proxy_host"), Integer.parseInt((String) settings.get("proxy_port")));
Log log = new Log(logFile);
update(master, router, published, subscriptions, log);
}
/**
* Load the settings, set the proxy, then enter into the main loop. The main
* loop performs an immediate update, and then an update every update_delay
* hours, as configured in the settings file.
*
* @param args
* Command line arguments. If there are any arguments provided,
* the first is taken as addressbook's home directory, and the
* others are ignored.
*/
public static void main(String[] args) {
_instance.run(args);
}
public void run(String[] args) {
String settingsLocation = "config.txt";
Map settings = new HashMap();
String home;
if (args.length > 0) {
home = args[0];
} else {
home = ".";
}
Map defaultSettings = new HashMap();
defaultSettings.put("proxy_host", "localhost");
defaultSettings.put("proxy_port", "4444");
defaultSettings.put("master_addressbook", "../userhosts.txt");
defaultSettings.put("router_addressbook", "../hosts.txt");
defaultSettings.put("published_addressbook", "../eepsite/docroot/hosts.txt");
defaultSettings.put("should_publish", "false");
defaultSettings.put("log", "log.txt");
defaultSettings.put("subscriptions", "subscriptions.txt");
defaultSettings.put("etags", "etags");
defaultSettings.put("last_modified", "last_modified");
defaultSettings.put("update_delay", "12");
File homeFile = new File(home);
if (!homeFile.exists()) {
boolean created = homeFile.mkdirs();
if (created)
System.out.println("INFO: Addressbook directory " + homeFile.getName() + " created");
else
System.out.println("ERROR: Addressbook directory " + homeFile.getName() + " could not be created");
}
File settingsFile = new File(homeFile, settingsLocation);
settings = ConfigParser.parse(settingsFile, defaultSettings);
// Wait five minutes before running the first update.
try {
Thread.currentThread().sleep(5*60*1000);
} catch (InterruptedException ie) {}
while (true) {
long delay = Long.parseLong((String) settings.get("update_delay"));
if (delay < 1) {
delay = 1;
}
update(settings, home);
try {
synchronized (this) {
wait(delay * 60 * 60 * 1000);
}
} catch (InterruptedException exp) {
}
settings = ConfigParser.parse(settingsFile, defaultSettings);
}
}
/**
* Call this to get the addressbook to reread its config and
* refetch its subscriptions.
*/
public static void wakeup() {
synchronized (_instance) {
_instance.notifyAll();
}
}
}


@ -0,0 +1,53 @@
/*
* Copyright (c) 2004 Ragnarok
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package addressbook;
/**
* A thread that runs the addressbook daemon; the five minute startup delay
* is handled in Daemon.run().
*
* @author Ragnarok
*
*/
public class DaemonThread extends Thread {
private String[] args;
/**
* Construct a DaemonThread with the command line arguments args.
* @param args
* A String array to pass to Daemon.main().
*/
public DaemonThread(String[] args) {
this.args = args;
}
/* (non-Javadoc)
* @see java.lang.Runnable#run()
*/
public void run() {
//try {
// Thread.sleep(5 * 60 * 1000);
//} catch (InterruptedException exp) {
//}
Daemon.main(this.args);
}
}


@ -0,0 +1,76 @@
/*
* Copyright (c) 2004 Ragnarok
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package addressbook;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Date;
/**
* A simple log with automatic time stamping.
*
* @author Ragnarok
*
*/
public class Log {
private File file;
/**
* Construct a Log instance that writes to the File file.
*
* @param file
* A File for the log to write to.
*/
public Log(File file) {
this.file = file;
}
/**
* Write entry to a new line in the log, with appropriate time stamp.
*
* @param entry
* A String containing a message to append to the log.
*/
public void append(String entry) {
try {
BufferedWriter bw = new BufferedWriter(new FileWriter(this.file,
true));
String timestamp = new Date().toString();
bw.write(timestamp + " -- " + entry);
bw.newLine();
bw.close();
} catch (IOException exp) {
}
}
/**
* Return the File that the Log is writing to.
*
* @return The File that the log is writing to.
*/
public File getFile() {
return this.file;
}
}


@ -0,0 +1,61 @@
/*
* Copyright (c) 2004 Ragnarok
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package addressbook;
import javax.servlet.GenericServlet;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
/**
* A wrapper for addressbook to allow it to be started as a web application.
*
* @author Ragnarok
*
*/
public class Servlet extends GenericServlet {
/* (non-Javadoc)
* @see javax.servlet.Servlet#service(javax.servlet.ServletRequest, javax.servlet.ServletResponse)
*/
public void service(ServletRequest request, ServletResponse response) {
}
/* (non-Javadoc)
* @see javax.servlet.Servlet#init(javax.servlet.ServletConfig)
*/
public void init(ServletConfig config) {
try {
super.init(config);
} catch (ServletException exp) {
}
String[] args = new String[1];
args[0] = config.getInitParameter("home");
DaemonThread thread = new DaemonThread(args);
thread.setDaemon(true);
thread.start();
System.out.println("INFO: Starting Addressbook " + Daemon.VERSION);
System.out.println("INFO: config root under " + args[0]);
}
}


@ -0,0 +1,105 @@
/*
* Copyright (c) 2004 Ragnarok
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package addressbook;
/**
* A subscription to a remote address book.
*
* @author Ragnarok
*
*/
public class Subscription {
private String location;
private String etag;
private String lastModified;
/**
* Construct a Subscription pointing to the address book at location, that
* was last read at the time represented by etag and lastModified.
*
* @param location
* A String representing a url to a remote address book.
* @param etag
* The etag header that we received the last time we read this
* subscription.
* @param lastModified
* the last-modified header we received the last time we read
* this subscription.
*/
public Subscription(String location, String etag, String lastModified) {
this.location = location;
this.etag = etag;
this.lastModified = lastModified;
}
/**
* Return the location this Subscription points at.
*
* @return A String representing a url to a remote address book.
*/
public String getLocation() {
return this.location;
}
/**
* Return the etag header that we received the last time we read this
* subscription.
*
* @return A String containing the etag header.
*/
public String getEtag() {
return this.etag;
}
/**
* Set the etag header.
*
* @param etag
* A String containing the etag header.
*/
public void setEtag(String etag) {
this.etag = etag;
}
/**
* Return the last-modified header that we received the last time we read
* this subscription.
*
* @return A String containing the last-modified header.
*/
public String getLastModified() {
return this.lastModified;
}
/**
* Set the last-modified header.
*
* @param lastModified
* A String containing the last-modified header.
*/
public void setLastModified(String lastModified) {
this.lastModified = lastModified;
}
}


@ -0,0 +1,73 @@
/*
* Copyright (c) 2004 Ragnarok
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package addressbook;
import java.util.Iterator;
import java.util.List;
/**
* An iterator over the subscriptions in a SubscriptionList. Note that this iterator
* returns AddressBook objects, and not Subscription objects.
*
* @author Ragnarok
*/
public class SubscriptionIterator implements Iterator {
private Iterator subIterator;
private String proxyHost;
private int proxyPort;
/**
* Construct a SubscriptionIterator using the Subscriptions in List subscriptions.
*
* @param subscriptions
* List of Subscription objects that represent address books.
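* @param proxyHost
* The hostname of the I2P http proxy used when fetching the subscribed address books.
* @param proxyPort
* The port of the I2P http proxy.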
*/
public SubscriptionIterator(List subscriptions, String proxyHost, int proxyPort) {
this.subIterator = subscriptions.iterator();
this.proxyHost = proxyHost;
this.proxyPort = proxyPort;
}
/* (non-Javadoc)
* @see java.util.Iterator#hasNext()
*/
public boolean hasNext() {
return this.subIterator.hasNext();
}
/* (non-Javadoc)
* @see java.util.Iterator#next()
*/
public Object next() {
Subscription sub = (Subscription) this.subIterator.next();
return new AddressBook(sub, this.proxyHost, this.proxyPort);
}
/* (non-Javadoc)
* @see java.util.Iterator#remove()
*/
public void remove() {
throw new UnsupportedOperationException();
}
}


@ -0,0 +1,129 @@
/*
* Copyright (c) 2004 Ragnarok
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
* IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package addressbook;
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
/**
* A list of Subscriptions loaded from a file.
*
* @author Ragnarok
*
*/
public class SubscriptionList {
private List subscriptions;
private File etagsFile;
private File lastModifiedFile;
private String proxyHost;
private int proxyPort;
/**
* Construct a SubscriptionList using the urls from locationsFile and, if
* available, the etags and last-modified headers loaded from etagsFile and
* lastModifiedFile.
*
* @param locationsFile
* A file containing one url on each line.
* @param etagsFile
* A file containing the etag headers used for conditional GET. The
* file is in the format "url=etag".
* @param lastModifiedFile
* A file containing the last-modified headers used for conditional
* GET. The file is in the format "url=lastmodified".
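* @param defaultSubs
* A List of String urls used as the subscription list if locationsFile cannot be read.
* @param proxyHost
* The hostname of the I2P http proxy used when fetching the subscribed address books.
* @param proxyPort
* The port of the I2P http proxy.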
*/
public SubscriptionList(File locationsFile, File etagsFile,
File lastModifiedFile, List defaultSubs, String proxyHost,
int proxyPort) {
this.subscriptions = new LinkedList();
this.etagsFile = etagsFile;
this.lastModifiedFile = lastModifiedFile;
this.proxyHost = proxyHost;
this.proxyPort = proxyPort;
Map etags;
Map lastModified;
String location;
List locations = ConfigParser.parseSubscriptions(locationsFile,
defaultSubs);
try {
etags = ConfigParser.parse(etagsFile);
} catch (IOException exp) {
etags = new HashMap();
}
try {
lastModified = ConfigParser.parse(lastModifiedFile);
} catch (IOException exp) {
lastModified = new HashMap();
}
Iterator iter = locations.iterator();
while (iter.hasNext()) {
location = (String) iter.next();
this.subscriptions.add(new Subscription(location, (String) etags
.get(location), (String) lastModified.get(location)));
}
}
/**
* Return an iterator over the AddressBooks represented by the Subscriptions
* in this SubscriptionList.
*
* @return A SubscriptionIterator.
*/
public SubscriptionIterator iterator() {
return new SubscriptionIterator(this.subscriptions, this.proxyHost,
this.proxyPort);
}
/**
* Write the etag and last-modified headers for each Subscription to files.
*/
public void write() {
Iterator iter = this.subscriptions.iterator();
Subscription sub;
Map etags = new HashMap();
Map lastModified = new HashMap();
while (iter.hasNext()) {
sub = (Subscription) iter.next();
if (sub.getEtag() != null) {
etags.put(sub.getLocation(), sub.getEtag());
}
if (sub.getLastModified() != null) {
lastModified.put(sub.getLocation(), sub.getLastModified());
}
}
try {
ConfigParser.write(etags, this.etagsFile);
ConfigParser.write(lastModified, this.lastModifiedFile);
} catch (IOException exp) {
}
}
}


@ -0,0 +1,10 @@
# addressbook master address book. Addresses placed in this file take precedence
# over those in the router address book and in remote address books. If changes
# are made to this file, they will be reflected in the router address book and
# published address book after the next update.
#
# Do not make changes directly to the router address book, as they could be lost
# during an update.
#
# This file takes addresses in the hosts.txt format, i.e.
# example.i2p=somereallylongbase64thingAAAA

View File

@ -0,0 +1,7 @@
# Subscription list for addressbook
#
# Each entry is an absolute url to a file in hosts.txt format.
# Since the list is checked in order, urls should be listed in order of trust.
#
http://dev.i2p/i2p/hosts.txt
http://duck.i2p/hosts.txt

16
apps/addressbook/web.xml Normal file
View File

@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app
PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN"
"http://java.sun.com/j2ee/dtds/web-app_2.2.dtd">
<web-app>
<servlet>
<servlet-name>addressbook</servlet-name>
<servlet-class>addressbook.Servlet</servlet-class>
<init-param>
<param-name>home</param-name>
<param-value>./addressbook</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
</web-app>

View File

@ -10,16 +10,14 @@
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import java.util.Timer;
import java.util.TimerTask;
import java.util.Properties;
import org.apache.log4j.DailyRollingFileAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.jibble.pircbot.IrcException;
import org.jibble.pircbot.NickAlreadyInUseException;
import org.jibble.pircbot.PircBot;
@ -111,12 +109,12 @@ public class Bogobot extends PircBot {
_botShutdownPassword = config.getProperty("botShutdownPassword", "take off eh");
_ircChannel = config.getProperty("ircChannel", "#i2p-chat");
_ircServer = config.getProperty("ircServer", "irc.duck.i2p");
_ircServer = config.getProperty("ircServer", "irc.postman.i2p");
_ircServerPort = Integer.parseInt(config.getProperty("ircServerPort", "6668"));
_isLoggerEnabled = Boolean.valueOf(config.getProperty("isLoggerEnabled", "true")).booleanValue();
_loggedHostnamePattern = config.getProperty("loggedHostnamePattern", "");
_logFilePrefix = config.getProperty("logFilePrefix", "irc.duck.i2p.i2p-chat");
_logFilePrefix = config.getProperty("logFilePrefix", "irc.postman.i2p.i2p-chat");
_logFileRotationInterval = config.getProperty("logFileRotationInterval", INTERVAL_DAILY);
_isRoundTripDelayEnabled = Boolean.valueOf(config.getProperty("isRoundTripDelayEnabled", "false")).booleanValue();

106
apps/fortuna/build.xml Normal file
View File

@ -0,0 +1,106 @@
<?xml version="1.0" encoding="UTF-8"?>
<project basedir="." default="all" name="fortuna">
<property name="cvs.base.dir" value="java/gnu-crypto" />
<property name="cvs.etc.dir" value="${cvs.base.dir}/etc" />
<property name="cvs.lib.dir" value="${cvs.base.dir}/lib" />
<property name="cvs.object.dir" value="${cvs.base.dir}/classes" />
<property name="cvs.base.crypto.object.dir" value="${cvs.object.dir}/gnu/crypto" />
<property name="cvs.cipher.object.dir" value="${cvs.base.crypto.object.dir}/cipher" />
<property name="cvs.hash.object.dir" value="${cvs.base.crypto.object.dir}/hash" />
<property name="cvs.prng.object.dir" value="${cvs.base.crypto.object.dir}/prng" />
<patternset id="fortuna.files">
<include name="${cvs.base.crypto.object.dir}/Registry.class"/>
<include name="${cvs.prng.object.dir}/Fortuna*.class"/>
<include name="${cvs.prng.object.dir}/BasePRNG.class"/>
<include name="${cvs.prng.object.dir}/RandomEventListener.class"/>
<include name="${cvs.prng.object.dir}/IRandom.class"/>
<include name="${cvs.cipher.object.dir}/CipherFactory.class"/>
<include name="${cvs.cipher.object.dir}/IBlockCipher.class"/>
<include name="${cvs.hash.object.dir}/HashFactory.class"/>
<include name="${cvs.hash.object.dir}/IMessageDigest.class"/>
</patternset>
<target name="all" depends="build,jar"
description="Create and test the custom Fortuna library" />
<target name="build" depends="-init,checkout"
description="Build the source and tests">
<ant dir="${cvs.base.dir}" target="jar" />
</target>
<target name="builddep" />
<target name="checkout" depends="-init" unless="cvs.source.available"
description="Check out GNU Crypto sources from CVS HEAD">
<cvs cvsRoot=":ext:anoncvs@savannah.gnu.org:/cvsroot/gnu-crypto"
cvsRsh="ssh"
dest="java"
package="gnu-crypto" />
</target>
<target name="clean"
description="Remove generated tests and object files">
<ant dir="${cvs.base.dir}" target="clean" />
</target>
<target name="cleandep" />
<target name="compile" />
<target name="distclean" depends="clean"
description="Remove all generated files">
<delete dir="build" />
<delete dir="jartemp" />
<!--
Annoyingly the GNU Crypto distclean task called here doesn't clean
*all* derived files from java/gnu-crypto/lib like it should.....
-->
<ant dir="${cvs.base.dir}" target="distclean" />
<!--
.....and so we mop up the rest ourselves.
-->
<delete dir="${cvs.lib.dir}" />
</target>
<target name="-init">
<available property="cvs.source.available" file="${cvs.base.dir}" />
</target>
<target name="jar" depends="build"
description="Create the custom Fortuna jar library">
<delete dir="build" />
<delete dir="jartemp" />
<mkdir dir="build" />
<mkdir dir="jartemp/${cvs.object.dir}" />
<copy todir="jartemp">
<fileset dir=".">
<patternset refid="fortuna.files" />
</fileset>
</copy>
<jar basedir="jartemp/${cvs.object.dir}" jarfile="build/fortuna.jar">
<manifest>
<section name="fortuna">
<attribute name="Implementation-Title" value="I2P Custom GNU Crypto Fortuna Library" />
<attribute name="Implementation-Version" value="CVS HEAD" />
<attribute name="Implementation-Vendor" value="Free Software Foundation" />
<attribute name="Implementation-Vendor-Id" value="FSF" />
<attribute name="Implementation-URL" value="http://www.gnu.org/software/gnu-crypto" />
</section>
</manifest>
</jar>
<delete dir="jartemp" />
</target>
<target name="test" depends="jar"
description="Perform crypto tests on custom Fortuna jar library" />
<!--
Add this when Fortuna tests are added to GNU Crypto, else write some
-->
<target name="update" depends="checkout"
description="Update GNU Crypto sources to latest CVS HEAD">
<cvs command="update -d" cvsRsh="ssh" dest="java/gnu-crypto" />
</target>
</project>

View File

@ -1,39 +0,0 @@
The Heartbeat GUI loads up the stat files generated by the Heartbeat
engine and renders them visually, offering a way to drill through different
data points and take snapshots as things change (by saving particular stat
files for later). The GUI itself doesn't need to be on the same machine
as the Heartbeat engine - it pulls the stat files through any URL - even
through the EepProxy.
An example Heartbeat GUI config file follows
# how often do we want to pull new data to render
refreshFrequency=60
## for each peer test we may want to include in the GUI:
# where to find the current stat file (URL or filename)
stat.0.location=http://dev.i2p.net/stats/heartbeatStat_khWY_30s_1kb.txt
## optional entries for each peer test describing what we want shown
## (and how we want it shown)
# do we want to plot the send time (from when the ping was sent until the pong server got it)?
stat.0.plot.current.send=true
# do we want to plot the receive time (from when the pong was sent until reception)?
stat.0.plot.current.receive=true
# do we want to plot the lost messages?
stat.0.plot.current.lost=true
# what color should the current lines be rendered in?
stat.0.plot.current.color=BLUE
## optional entries for each peer test describing what averages we want
## rendered
# plot 1 minute send average?
stat.0.plot.1m.send=true
# plot 1 minute receive average?
stat.0.plot.1m.receive=true
# plot 1 minute lost message average?
stat.0.plot.1m.lost=true
# what color should the 1 minute averages be rendered as?
stat.0.plot.1m.color=GREEN
## repeated for all of the averaged periods, e.g.
## stat.0.plot.30m, .60m, 1440m (1 day)
There may be some other options, such as where to store snapshot files, whether
to generate PNG images, etc.
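As a rough sketch of the refresh step described above, the following pulls a stat file over HTTP and picks out the per-ping EVENT lines. It is illustrative only: the URL is the stat.0.location from the example config, the class name is invented, and the real GUI's loader and chart rendering are not shown here.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
public class StatFileFetchSketch {
    public static void main(String[] args) throws Exception {
        // stat.0.location from the example config above
        URL statUrl = new URL("http://dev.i2p.net/stats/heartbeatStat_khWY_30s_1kb.txt");
        BufferedReader in = new BufferedReader(new InputStreamReader(statUrl.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            // EVENT lines carry the individual ping results; everything else is summary data
            if (line.startsWith("EVENT"))
                System.out.println(line);
        }
        in.close();
    }
}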

View File

@ -1,122 +0,0 @@
Heartbeat
Application layer tool for monitoring the long term health of the
network by periodically testing peers, generating stats, and
rendering them visually. The engine (both server and client) should
work headless and separate from the GUI, exposing the data in a simple
to parse (and human readable) text file for each peer being tested.
The GUI then periodically refreshes itself by loading those files
(either locally or from a URL) and renders the current state accordingly,
giving users a way to check that the network is alive, and giving devs a tool
to both monitor the state of the network and to debug different situations (by
accessing the stat file - either live or archived).
The heartbeat configuration file is organized as a standard properties
file (by default located at heartbeat.config, but that can be overridden by
passing a filename as the first argument to the Heartbeat command):
# where the router is located (default is localhost)
i2cpHost=localhost
# I2CP port for the router (default is 7654)
i2cpPort=4001
# How many hops we want the router to put in our tunnels (default is 2)
numHops=2
# where our private destination keys are located - if this doesn't exist,
# a new one will be created and saved there (by default, heartbeat.keys)
privateDestinationFile=heartbeat_r2.keys
## peer tests configured below:
# destination peer for test 0
peer.0.peer=[destination in base64]
# where will we write out the stat data?
peer.0.statFile=heartbeatStat_khWY_30s_1kb.txt
# how many minutes will we keep stats for?
peer.0.statDuration=30
# how often will we write out new stat data (in seconds)?
peer.0.statFrequency=60
# how often will we send a ping to the peer (in seconds)?
peer.0.sendFrequency=30
# how many bytes will be included in the ping?
peer.0.sendSize=1024
# take a guess...
peer.0.comment=Test with localhost sending 1KB of data every 30 seconds
# we can keep track of a few moving averages - this value includes a whitespace
# delimited list of numbers, each specifying a period to calculate the average
# over (in minutes)
peer.0.averagePeriods=1 5 30
## repeat the peer.0.* for as many tests as desired, incrementing as necessary
If there are no peer.* lines, it will simply run a pong server. If any data is
missing, it will use the defaults (though there are no defaults for peer.* lines) -
running the Heartbeat app with no heartbeat configuration file whatsoever will create
a new pong server (storing its keys at heartbeat.keys) using the I2P router at
localhost:7654.
The stat file generated for each set of peer.n.* lines contains the current state
of the test and its averages, as well as any other interesting data points. An example
stat file follows (hopefully it is self-explanatory):
peer khWYqCETu9YtPUvGV92ocsbEW5DezhKlIG7ci8RLX3g=
local u-9hlR1ik2hemXf0HvKMfeRgrS86CbNQh25e7XBhaQE=
peerDest [base 64 of the full destination]
localDest [base 64 of the full destination]
numTunnelHops 2
comment Test with localhost sending 30KB every 20 seconds
sendFrequency 20
sendSize 30720
sessionStart 20040409.22:51:10.915
currentTime 20040409.23:31:39.607
numPending 2
lifetimeSent 118
lifetimeRecv 113
#averages minutes sendMs recvMs numLost
periodAverage 1 1843 771 0
periodAverage 5 786 752 1
periodAverage 30 855 735 3
#action status date and time sent sendMs replyMs
EVENT OK 20040409.23:21:44.742 691 670
EVENT OK 20040409.23:22:05.201 671 581
EVENT OK 20040409.23:22:26.301 1182 1452
EVENT OK 20040409.23:22:47.322 24304 1723
EVENT OK 20040409.23:23:08.232 2293 1081
EVENT OK 20040409.23:23:29.332 1392 641
EVENT OK 20040409.23:23:50.262 641 761
EVENT OK 20040409.23:24:11.102 651 701
EVENT OK 20040409.23:24:31.401 841 621
EVENT OK 20040409.23:24:52.061 651 681
EVENT OK 20040409.23:25:12.480 701 1623
EVENT OK 20040409.23:25:32.990 1442 1212
EVENT OK 20040409.23:25:54.230 591 631
EVENT OK 20040409.23:26:14.620 620 691
EVENT OK 20040409.23:26:35.199 1793 1432
EVENT OK 20040409.23:26:56.570 661 641
EVENT OK 20040409.23:27:17.200 641 660
EVENT OK 20040409.23:27:38.120 611 921
EVENT OK 20040409.23:27:58.699 831 621
EVENT OK 20040409.23:28:19.559 801 661
EVENT OK 20040409.23:28:40.279 601 611
EVENT OK 20040409.23:29:00.648 601 621
EVENT OK 20040409.23:29:21.288 701 661
EVENT LOST 20040409.23:29:41.828
EVENT LOST 20040409.23:30:02.327
EVENT LOST 20040409.23:30:22.656
EVENT OK 20040409.23:31:24.305 1843 771
The actual ping and pong messages sent are formatted trivially -
ping messages contain
$from $series $type $sentOn $size $payload
while pong messages contain
$from $series $type $sentOn $receivedOn $size $payload
$series is a number describing the sending client's test (so that you can
ping the same peer with different configurations concurrently, varying things
like the frequency and size of the message, window, etc).
They are sent as raw binary messages though, so see I2PAdapter.sendPing(..)
and I2PAdapter.sendPong(..) for the details.
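A minimal sketch of that framing is below. The field widths and the class name are assumptions chosen for illustration (the real encoding uses the I2P DataHelper routines in I2PAdapter.sendPing(..), shown later in this changeset), but the field order follows the description above, with type 0 for ping and 1 for pong.
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
public class PingFramingSketch {
    /** Illustrative framing of: $from $series $type $sentOn $size $payload */
    static byte[] encodePing(byte[] fromDest, int series, long sentOn, byte[] payload)
            throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(baos);
        out.write(fromDest);            // $from - the sender's destination, already serialized
        out.writeShort(series);         // $series - which test series this ping belongs to
        out.writeByte(0);               // $type - 0 for ping, 1 for pong
        out.writeLong(sentOn);          // $sentOn - sender's clock, in milliseconds
        out.writeShort(payload.length); // $size - length of the padding payload
        out.write(payload);             // $payload - arbitrary data, echoed back in the pong
        out.flush();
        return baos.toByteArray();
    }
}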
To get valid measurements, of course, you will want to make sure that
both the heartbeat client and pong server have synchronized clocks (even
more so than I2P requires). It is highly recommended that only NTP
synchronized peers be used for heartbeat tests.

View File

@ -1,66 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project basedir="." default="all" name="heartbeat">
<target name="all" depends="clean, buildGUI" />
<target name="build" depends="builddep, jar" />
<target name="buildGUI" depends="build, jarGUI" />
<target name="builddep">
<ant dir="../../../core/java/" target="build" />
</target>
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac srcdir="./src" debug="true" deprecation="on" source="1.3" target="1.3" destdir="./build/obj" includes="**/*.java" excludes="net/i2p/heartbeat/gui/**" classpath="../../../core/java/build/i2p.jar" />
</target>
<target name="compileGUI">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac debug="true" source="1.3" target="1.3" deprecation="on" destdir="./build/obj">
<src path="src/" />
<classpath path="../../../core/java/build/i2p.jar" />
<classpath path="../../jfreechart/jfreechart-0.9.17/lib/jcommon-0.9.2.jar" />
<classpath path="../../jfreechart/jfreechart-0.9.17/lib/log4j-1.2.8.jar" />
<classpath path="../../jfreechart/jfreechart-0.9.17/jfreechart-0.9.17.jar" />
</javac>
</target>
<target name="jar" depends="compile">
<jar destfile="./build/heartbeat.jar" basedir="./build/obj" includes="**/*.class">
<manifest>
<attribute name="Main-Class" value="net.i2p.heartbeat.Heartbeat" />
<attribute name="Class-Path" value="i2p.jar heartbeat.jar" />
</manifest>
</jar>
</target>
<target name="jarGUI" depends="compileGUI">
<copy file="../../jfreechart/jfreechart-0.9.17/jfreechart-0.9.17.jar" todir="build/" />
<copy file="../../jfreechart/jfreechart-0.9.17/lib/log4j-1.2.8.jar" todir="build/" />
<copy file="../../jfreechart/jfreechart-0.9.17/lib/jcommon-0.9.2.jar" todir="build/" />
<jar destfile="./build/heartbeatGUI.jar" basedir="./build/obj" includes="**">
<manifest>
<attribute name="Main-Class" value="net.i2p.heartbeat.gui.HeartbeatMonitor" />
<attribute name="Class-Path" value="log4j-1.2.8.jar jcommon-0.9.2.jar jfreechart-0.9.17.jar heartbeatGUI.jar i2p.jar" />
</manifest>
</jar>
<echo message="You will need to copy the log4j, jcommon, and jfreechart jar files into your lib dir" />
</target>
<target name="javadoc">
<mkdir dir="./build" />
<mkdir dir="./build/javadoc" />
<javadoc
sourcepath="./src:../../../core/java/src:../../../core/java/test" destdir="./build/javadoc"
packagenames="*"
use="true"
access="package"
splitindex="true"
windowtitle="I2P heartbeat monitor" />
</target>
<target name="clean">
<delete dir="./build" />
</target>
<target name="cleandep" depends="clean">
<ant dir="../../../core/java/" target="cleandep" />
<ant dir="../../../core/java/" target="cleandep" />
</target>
<target name="distclean" depends="clean">
<ant dir="../../../core/java/" target="distclean" />
</target>
</project>

View File

@ -1,468 +0,0 @@
package net.i2p.heartbeat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import java.util.StringTokenizer;
import net.i2p.data.DataFormatException;
import net.i2p.data.Destination;
import net.i2p.util.Log;
/**
* Define the configuration for testing against one particular peer as a client
*/
public class ClientConfig {
private static final Log _log = new Log(ClientConfig.class);
private Destination _peer;
private Destination _us;
private String _statFile;
private int _statDuration;
private int _statFrequency;
private int _sendFrequency;
private int _sendSize;
private int _numHops;
private String _comment;
private int _averagePeriods[];
/**
* @seeRoutine ClientConfig#load
* @seeRoutine ClientConfig#store
*/
public static final String PROP_PREFIX = "peer.";
/**
* @seeRoutine ClientConfig#load
* @seeRoutine ClientConfig#store
*/
public static final String PROP_PEER = ".peer";
/**
* @seeRoutine ClientConfig#load
* @seeRoutine ClientConfig#store
*/
public static final String PROP_STATFILE = ".statFile";
/**
* @seeRoutine ClientConfig#load
* @seeRoutine ClientConfig#store
*/
public static final String PROP_STATDURATION = ".statDuration";
/**
* @seeRoutine ClientConfig#load
* @seeRoutine ClientConfig#store
*/
public static final String PROP_STATFREQUENCY = ".statFrequency";
/**
* @seeRoutine ClientConfig#load
* @seeRoutine ClientConfig#store
*/
public static final String PROP_SENDFREQUENCY = ".sendFrequency";
/**
* @seeRoutine ClientConfig#load
* @seeRoutine ClientConfig#store
*/
public static final String PROP_SENDSIZE = ".sendSize";
/**
* @seeRoutine ClientConfig#load
* @seeRoutine ClientConfig#store
*/
public static final String PROP_COMMENT = ".comment";
/**
* @seeRoutine ClientConfig#load
* @seeRoutine ClientConfig#store
*/
public static final String PROP_AVERAGEPERIODS = ".averagePeriods";
/**
* Default constructor...
*/
public ClientConfig() {
this(null, null, null, -1, -1, -1, -1, 0, null, null);
}
/**
* Create a dummy client config to be fetched from the specified location
* @param location the location to fetch from
*/
public ClientConfig(String location) {
this(null, null, location, -1, -1, -1, -1, 0, null, null);
}
/**
* @param peer who we will test against
* @param us who we are
* @param statLocation where the stat data should be stored/fetched
* @param duration how many minutes to keep events for
* @param statFreq how often to write out stats
* @param sendFreq how often to send pings
* @param sendSize how large the pings should be
* @param numHops how many hops is the current Heartbeat app using
* @param comment describe this test
* @param averagePeriods list of minutes to summarize over
*/
public ClientConfig(Destination peer, Destination us, String statLocation, int duration, int statFreq, int sendFreq,
int sendSize, int numHops, String comment, int averagePeriods[]) {
_peer = peer;
_us = us;
_statFile = statLocation;
_statDuration = duration;
_statFrequency = statFreq;
_sendFrequency = sendFreq;
_sendSize = sendSize;
_numHops = numHops;
_comment = comment;
_averagePeriods = averagePeriods;
}
/**
* Retrieves the peer to test against
*
* @return the Destination (peer)
*/
public Destination getPeer() {
return _peer;
}
/**
* Sets the peer to test against
*
* @param peer the Destination (peer)
*/
public void setPeer(Destination peer) {
_peer = peer;
}
/**
* Retrieves who we are when we test
*
* @return the Destination (us)
*/
public Destination getUs() {
return _us;
}
/**
* Sets who we are when we test
*
* @param us the Destination (us)
*/
public void setUs(Destination us) {
_us = us;
}
/**
* Retrieves the location to write the current stats to
*
* @return the name of the file
*/
public String getStatFile() {
return _statFile;
}
/**
* Sets the name of the location we write the current stats to
*
* @param statFile the name of the file
*/
public void setStatFile(String statFile) {
_statFile = statFile;
}
/**
* Retrieves how many minutes of statistics should be maintained within the window for this client
*
* @return the number of minutes
*/
public int getStatDuration() {
return _statDuration;
}
/**
* Sets how many minutes of statistics should be maintained within the window for this client
*
* @param durationMinutes the number of minutes
*/
public void setStatDuration(int durationMinutes) {
_statDuration = durationMinutes;
}
/**
* Retrieves how frequently the stats are written out (in seconds)
*
* @return the frequency in seconds
*/
public int getStatFrequency() {
return _statFrequency;
}
/**
* Sets how frequently the stats are written out (in seconds)
*
* @param freqSeconds the frequency in seconds
*/
public void setStatFrequency(int freqSeconds) {
_statFrequency = freqSeconds;
}
/**
* Retrieves how frequently we send messages to the peer (in seconds)
*
* @return the frequency in seconds
*/
public int getSendFrequency() {
return _sendFrequency;
}
/**
* Sets how frequently we send messages to the peer (in seconds)
*
* @param freqSeconds the frequency in seconds
*/
public void setSendFrequency(int freqSeconds) {
_sendFrequency = freqSeconds;
}
/**
* Retrieves how many bytes the ping messages should be (min values ~700, max ~32KB)
*
* @return the size in bytes
*/
public int getSendSize() {
return _sendSize;
}
/**
* Sets how many bytes the ping messages should be (min values ~700, max ~32KB)
*
* @param numBytes the size in bytes
*/
public void setSendSize(int numBytes) {
_sendSize = numBytes;
}
/**
* Retrieves the brief, 1 line description of the test. Useful comments are along the lines of "The peer is located on a fast router and connected with 2
* hop tunnels".
*
* @return the brief comment
*/
public String getComment() {
return _comment;
}
/**
* Sets a brief, 1 line description (comment) of the test.
*
* @param comment the brief comment
*/
public void setComment(String comment) {
_comment = comment;
}
/**
* Retrieves the periods that the client's tests should be averaged over.
*
* @return list of periods (in minutes) that the data should be averaged over, or null
*/
public int[] getAveragePeriods() {
return _averagePeriods;
}
/**
* Sets the periods that the client's tests should be averaged over.
*
* @param periods the list of periods (in minutes) that the data should be averaged over, or null
*/
public void setAveragePeriods(int periods[]) {
_averagePeriods = periods;
}
/**
* Make sure we're keeping track of the average over the given time period.
*
* @param minutes how many minutes to monitor
*/
public void addAveragePeriod(int minutes) {
if (_averagePeriods != null) {
for (int i = 0; i < _averagePeriods.length; i++) {
if (_averagePeriods[i] == minutes)
return;
}
}
int numPeriods = 1;
if (_averagePeriods != null)
numPeriods += _averagePeriods.length;
int periods[] = new int[numPeriods];
if (_averagePeriods != null)
System.arraycopy(_averagePeriods, 0, periods, 0, _averagePeriods.length);
periods[periods.length-1] = minutes;
Arrays.sort(periods);
_averagePeriods = periods;
}
/**
* Retrieves how many hops this test engine is configured to use for its outbound and inbound tunnels
*
* @return the number of hops
*/
public int getNumHops() {
return _numHops;
}
/**
* Sets how many hops this test engine is configured to use for its outbound and inbound tunnels
*
* @param numHops the number of hops
*/
public void setNumHops(int numHops) {
_numHops = numHops;
}
/**
* Load the client config from the properties specified, deriving the current config entry from the peer number.
*
* @param clientConfig the properties to load from
* @param peerNum the number associated with the peer
* @return true if it was loaded correctly, false if there were errors
*/
public boolean load(Properties clientConfig, int peerNum) {
if ((clientConfig == null) || (peerNum < 0)) return false;
String peerVal = clientConfig.getProperty(PROP_PREFIX + peerNum + PROP_PEER);
String statFileVal = clientConfig.getProperty(PROP_PREFIX + peerNum + PROP_STATFILE);
String statDurationVal = clientConfig.getProperty(PROP_PREFIX + peerNum + PROP_STATDURATION);
String statFrequencyVal = clientConfig.getProperty(PROP_PREFIX + peerNum + PROP_STATFREQUENCY);
String sendFrequencyVal = clientConfig.getProperty(PROP_PREFIX + peerNum + PROP_SENDFREQUENCY);
String sendSizeVal = clientConfig.getProperty(PROP_PREFIX + peerNum + PROP_SENDSIZE);
String commentVal = clientConfig.getProperty(PROP_PREFIX + peerNum + PROP_COMMENT);
String periodsVal = clientConfig.getProperty(PROP_PREFIX + peerNum + PROP_AVERAGEPERIODS);
if ((peerVal == null) || (statFileVal == null) || (statDurationVal == null) || (statFrequencyVal == null)
|| (sendFrequencyVal == null) || (sendSizeVal == null)) {
if (_log.shouldLog(Log.DEBUG)) {
_log.debug("Peer number " + peerNum + " does not exist");
}
return false;
}
try {
int duration = getInt(statDurationVal);
int statFreq = getInt(statFrequencyVal);
int sendFreq = getInt(sendFrequencyVal);
int sendSize = getInt(sendSizeVal);
if ((duration <= 0) || (statFreq <= 0) || (sendFreq < 0) || (sendSize <= 0)) {
if (_log.shouldLog(Log.WARN)) {
_log.warn("Invalid client config: duration [" + statDurationVal + "] stat frequency ["
+ statFrequencyVal + "] send frequency [" + sendFrequencyVal + "] send size ["
+ sendSizeVal + "]");
}
return false;
}
statFileVal = statFileVal.trim();
if (statFileVal.length() <= 0) {
if (_log.shouldLog(Log.WARN)) {
_log.warn("Stat file is blank for peer " + peerNum);
}
return false;
}
Destination d = new Destination();
d.fromBase64(peerVal);
if (commentVal == null) {
commentVal = "";
}
commentVal = commentVal.trim();
commentVal = commentVal.replace('\n', '_');
List periods = new ArrayList(4);
if (periodsVal != null) {
StringTokenizer tok = new StringTokenizer(periodsVal);
while (tok.hasMoreTokens()) {
String periodVal = tok.nextToken();
int minutes = getInt(periodVal);
if (minutes > 0) {
periods.add(new Integer(minutes));
}
}
}
int avgPeriods[] = new int[periods.size()];
for (int i = 0; i < periods.size(); i++) {
avgPeriods[i] = ((Integer) periods.get(i)).intValue();
}
_comment = commentVal;
_statDuration = duration;
_statFrequency = statFreq;
_sendFrequency = sendFreq;
_sendSize = sendSize;
_statFile = statFileVal;
_peer = d;
_averagePeriods = avgPeriods;
return true;
} catch (DataFormatException dfe) {
_log.error("Peer destination for " + peerNum + " was invalid: " + peerVal);
return false;
}
}
/**
* Store the client config to the properties specified, deriving the current config entry from the peer number.
*
* @param clientConfig the properties to store to
* @param peerNum the number associated with the peer
* @return true if it was stored correctly, false if there were errors
*/
public boolean store(Properties clientConfig, int peerNum) {
if ((_peer == null) || (_sendFrequency < 0) || (_sendSize <= 0) || (_statDuration <= 0)
|| (_statFrequency <= 0) || (_statFile == null)) { return false; }
String comment = _comment;
if (comment == null) {
comment = "";
}
comment = comment.trim();
comment = comment.replace('\n', '_');
StringBuffer buf = new StringBuffer(32);
if (_averagePeriods != null) {
for (int i = 0; i < _averagePeriods.length; i++) {
buf.append(_averagePeriods[i]).append(' ');
}
}
clientConfig.setProperty(PROP_PREFIX + peerNum + PROP_PEER, _peer.toBase64());
clientConfig.setProperty(PROP_PREFIX + peerNum + PROP_STATFILE, _statFile);
clientConfig.setProperty(PROP_PREFIX + peerNum + PROP_STATDURATION, _statDuration + "");
clientConfig.setProperty(PROP_PREFIX + peerNum + PROP_STATFREQUENCY, _statFrequency + "");
clientConfig.setProperty(PROP_PREFIX + peerNum + PROP_SENDFREQUENCY, _sendFrequency + "");
clientConfig.setProperty(PROP_PREFIX + peerNum + PROP_SENDSIZE, _sendSize + "");
clientConfig.setProperty(PROP_PREFIX + peerNum + PROP_COMMENT, comment);
clientConfig.setProperty(PROP_PREFIX + peerNum + PROP_AVERAGEPERIODS, buf.toString());
return true;
}
private static final int getInt(String val) {
if (val == null) return -1;
try {
int i = Integer.parseInt(val);
return i;
} catch (NumberFormatException nfe) {
if (_log.shouldLog(Log.DEBUG)) {
_log.debug("Value [" + val + "] is not a valid integer");
}
return -1;
}
}
}

View File

@ -1,133 +0,0 @@
package net.i2p.heartbeat;
import net.i2p.data.Destination;
import net.i2p.util.Clock;
import net.i2p.util.I2PThread;
import net.i2p.util.Log;
/**
* Responsible for actually conducting the tests, coordinating the storing of the
* stats, and the management of the rates. This has its own thread specific for
* pumping data around as well.
*
*/
class ClientEngine {
private static final Log _log = new Log(ClientEngine.class);
/** who can send our pings? */
private Heartbeat _heartbeat;
/** actual test state */
private PeerData _data;
/** have we been stopped? */
private boolean _active;
/** used to generate engine IDs */
private static int __id = 0;
/** this engine's id, unique to the {test,sendingClient,startTime} */
private int _id;
private static PeerDataWriter writer = new PeerDataWriter();
/**
* Create a new engine that will send its pings through the given heartbeat
* system, and will coordinate the test according to the configuration specified.
* @param heartbeat the Heartbeat to send pings through
* @param config the ClientConfig to load the test configuration from
*/
public ClientEngine(Heartbeat heartbeat, ClientConfig config) {
_heartbeat = heartbeat;
_data = new PeerData(config);
_active = false;
_id = ++__id;
}
/** stop sending any more pings or writing any more state */
public void stopEngine() {
_active = false;
if (_log.shouldLog(Log.INFO))
_log.info("Stopping engine talking to peer " + _data.getConfig().getPeer().calculateHash().toBase64());
}
/** start up the test (this does not block, as it fires up the test thread) */
public void startEngine() {
_active = true;
I2PThread t = new I2PThread(new ClientRunner());
t.setName("HeartbeatClient " + _id);
t.start();
}
/**
* Who are we testing?
* @return the Destination (peer) we're testing
*/
public Destination getPeer() {
return _data.getConfig().getPeer();
}
/**
* What is our series identifier (used to locally identify a test)
* @return the series identifier
*/
public int getSeriesNum() {
return _id;
}
/**
* receive notification from the heartbeat system that a pong was received in
* reply to a ping we have sent.
*
* @param sentOn when did we send the ping?
* @param replyOn when did the peer send the pong?
*/
public void receivePong(long sentOn, long replyOn) {
_data.pongReceived(sentOn, replyOn);
}
/** fire off a new ping */
private void doSend() {
long now = Clock.getInstance().now();
_data.addPing(now);
_heartbeat.sendPing(_data.getConfig().getPeer(), _id, now, _data.getConfig().getSendSize());
}
/** our actual heartbeat pumper - this drives the test */
private class ClientRunner implements Runnable {
/**
* @see java.lang.Runnable#run()
*/
public void run() {
if (_log.shouldLog(Log.INFO))
_log.info("Starting engine talking to peer " + _data.getConfig().getPeer().calculateHash().toBase64());
// when do we need to send the next PING?
long nextSend = Clock.getInstance().now();
// when do we need to write out the next state data?
long nextWrite = Clock.getInstance().now();
while (_active) {
if (Clock.getInstance().now() >= nextSend) {
doSend();
nextSend = Clock.getInstance().now() + _data.getConfig().getSendFrequency() * 1000;
}
if (Clock.getInstance().now() >= nextWrite) {
boolean written = writer.persist(_data);
if (!written) {
if (_log.shouldLog(Log.ERROR)) _log.error("Unable to write the client state data");
} else {
if (_log.shouldLog(Log.DEBUG)) _log.debug("Client state data written");
}
}
_data.cleanup();
long timeToWait = nextSend - Clock.getInstance().now();
if (timeToWait > 0) {
try {
Thread.sleep(timeToWait);
} catch (InterruptedException ie) {
}
}
}
}
}
}

View File

@ -1,254 +0,0 @@
package net.i2p.heartbeat;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Properties;
import net.i2p.data.Destination;
import net.i2p.util.Log;
/**
* Main driver for the heartbeat engine, loading 0 or more tests, firing
* up a ClientEngine for each, and serving as a pong server. If there isn't
* a configuration file, or if the configuration file doesn't specify any tests,
* it simply sits around as a pong server, passively responding to whatever is
* sent its way. <p />
*
* The config file format is exemplified below:
* <pre>
* # where the router is located (default is localhost)
* i2cpHost=localhost
* # I2CP port for the router (default is 7654)
* i2cpPort=4001
* # How many hops we want the router to put in our tunnels (default is 2)
* numHops=2
* # where our private destination keys are located - if this doesn't exist,
* # a new one will be created and saved there (by default, heartbeat.keys)
* privateDestinationFile=heartbeat_r2.keys
* # where do we want to export the plain base64 of our destination?
* publicDestinationFile=heartbeat_r2.txt
*
* ## peer tests configured below:
*
* # destination peer for test 0
* peer.0.peer=[destination in base64]
* # where will we write out the stat data?
* peer.0.statFile=heartbeatStat_khWY_30s_1kb.txt
* # how many minutes will we keep stats for?
* peer.0.statDuration=30
* # how often will we write out new stat data (in seconds)?
* peer.0.statFrequency=60
* # how often will we send a ping to the peer (in seconds)?
* peer.0.sendFrequency=30
* # how many bytes will be included in the ping?
* peer.0.sendSize=1024
* # take a guess...
* peer.0.comment=Test with localhost sending 1KB of data every 30 seconds
* # we can keep track of a few moving averages - this value includes a whitespace
* # delimited list of numbers, each specifying a period to calculate the average
* # over (in minutes)
* peer.0.averagePeriods=1 5 30
* ## repeat the peer.0.* for as many tests as desired, incrementing as necessary
* </pre>
*
*/
public class Heartbeat {
private static final Log _log = new Log(Heartbeat.class);
/** location containing this heartbeat's config */
private String _configFile;
/** clientNum (Integer) to ClientConfig mapping */
private Map _clientConfigs;
/** series num (Integer) to ClientEngine mapping */
private Map _clientEngines;
/** helper class for managing our I2P send/receive and message formatting */
private I2PAdapter _adapter;
/** our own callback that the I2PAdapter notifies on ping or pong messages */
private PingPongAdapter _eventAdapter;
/** if there are no command line arguments, load the config from "heartbeat.config" */
public static final String CONFIG_FILE_DEFAULT = "heartbeat.config";
/**
* build up a new heartbeat manager, but don't actually do anything
* @param configFile the name of the configuration file
*/
public Heartbeat(String configFile) {
_configFile = configFile;
_clientConfigs = new HashMap();
_clientEngines = new HashMap();
_eventAdapter = new PingPongAdapter();
_adapter = new I2PAdapter();
_adapter.setListener(_eventAdapter);
}
private Heartbeat() {
}
/** load up the config data (but don't build any engines or start them up) */
public void loadConfig() {
Properties props = new Properties();
FileInputStream fin = null;
File configFile = new File(_configFile);
if (configFile.exists()) {
try {
fin = new FileInputStream(_configFile);
props.load(fin);
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error reading the config data", ioe);
}
} finally {
if (fin != null) try {
fin.close();
} catch (IOException ioe) {
}
}
}
loadBaseConfig(props);
loadClientConfigs(props);
}
/**
* send a ping message to the peer
*
* @param peer peer to ping
* @param seriesNum id used to keep track of multiple pings (of different size/frequency) to a peer
* @param now current time to be sent in the ping (so we can watch for it in the pong)
* @param size total message size to send
*/
void sendPing(Destination peer, int seriesNum, long now, int size) {
if (_adapter.getIsConnected()) _adapter.sendPing(peer, seriesNum, now, size);
}
/**
* load up the base data (I2CP config, etc)
* @param props the properties to load from
*/
private void loadBaseConfig(Properties props) {
_adapter.loadConfig(props);
}
/**
* load up all of the test config data
* @param props the properties to load from
* */
private void loadClientConfigs(Properties props) {
int i = 0;
while (true) {
ClientConfig config = new ClientConfig();
if (!config.load(props, i)) {
break;
}
_clientConfigs.put(new Integer(i), config);
i++;
}
}
/** connect to the network */
private void connect() {
boolean connected = _adapter.connect();
if (!connected) _log.error("Unable to connect to the router");
}
/** disconnect from the network */
private void disconnect() { /* UNUSED */
_adapter.disconnect();
}
/** start up all of the tests */
public void startEngines() {
for (Iterator iter = _clientConfigs.values().iterator(); iter.hasNext();) {
ClientConfig config = (ClientConfig) iter.next();
ClientEngine engine = new ClientEngine(this, config);
config.setUs(_adapter.getLocalDestination());
config.setNumHops(_adapter.getNumHops());
_clientEngines.put(new Integer(engine.getSeriesNum()), engine);
engine.startEngine();
}
}
/** stop all of the tests */
public void stopEngines() {
for (Iterator iter = _clientEngines.values().iterator(); iter.hasNext();) {
ClientEngine engine = (ClientEngine) iter.next();
engine.stopEngine();
}
_clientEngines.clear();
}
/**
* Fire up a new heartbeat system, waiting until, well, forever. Builds
* a new heartbeat system, loads the config, connects to the network, starts
* the engines, and then sits back and relaxes, responding to any pings and
* running any tests. <p />
*
* <code> <b>Usage: </b> Heartbeat [<i>configFileName</i>]</code> <p />
* @param args the list of args passed to the program from the command-line
*/
public static void main(String args[]) {
String configFile = CONFIG_FILE_DEFAULT;
if (args.length == 1) {
configFile = args[0];
}
if (_log.shouldLog(Log.INFO)) {
_log.info("Starting up with config file " + configFile);
}
Heartbeat heartbeat = new Heartbeat(configFile);
heartbeat.loadConfig();
heartbeat.connect();
heartbeat.startEngines();
Object o = new Object();
while (true) {
try {
synchronized (o) {
o.wait();
}
} catch (InterruptedException ie) {
}
}
}
/**
* Receive event notification from the I2PAdapter
*
*/
private class PingPongAdapter implements I2PAdapter.PingPongEventListener {
/**
* We were pinged, so always just send a pong back.
*
* @param from who sent us the ping?
* @param seriesNum what series did the sender specify?
* @param sentOn when did the sender say they sent their ping?
* @param data arbitrary payload data
*/
public void receivePing(Destination from, int seriesNum, Date sentOn, byte[] data) {
if (_adapter.getIsConnected()) {
_adapter.sendPong(from, seriesNum, sentOn, data);
}
}
/**
* We received a pong, so find the right client engine and tell it about the pong.
*
* @param from who sent us the pong
* @param seriesNum our client ID
* @param sentOn when did we send the ping?
* @param replyOn when did they send their pong?
* @param data the arbitrary data we sent in the ping (that they sent back in the pong)
*/
public void receivePong(Destination from, int seriesNum, Date sentOn, Date replyOn, byte[] data) {
ClientEngine engine = (ClientEngine) _clientEngines.get(new Integer(seriesNum));
if (engine.getPeer().equals(from)) {
engine.receivePong(sentOn.getTime(), replyOn.getTime());
}
}
}
}

View File

@ -1,604 +0,0 @@
package net.i2p.heartbeat;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Date;
import java.util.Properties;
import net.i2p.I2PAppContext;
import net.i2p.I2PException;
import net.i2p.client.I2PClient;
import net.i2p.client.I2PClientFactory;
import net.i2p.client.I2PSession;
import net.i2p.client.I2PSessionException;
import net.i2p.client.I2PSessionListener;
import net.i2p.data.DataFormatException;
import net.i2p.data.DataHelper;
import net.i2p.data.Destination;
import net.i2p.util.Clock;
import net.i2p.util.Log;
/**
* Tie-in to the I2P SDK for the Heartbeat system, talking to the I2PSession and
* dealing with the raw ping and pong messages.
*
*/
class I2PAdapter {
private final static Log _log = new Log(I2PAdapter.class);
/** I2CP host */
private String _i2cpHost;
/** I2CP port */
private int _i2cpPort;
/** how long do we want our tunnels to be? */
private int _numHops;
/** filename containing the heartbeat engine's private destination info */
private String _privateDestFile;
/** filename to store the heartbeat engine's public destination in base64*/
private String _publicDestFile;
/** our destination */
private Destination _localDest;
/** who do we tell? */
private PingPongEventListener _listener;
/** how do we talk to the router */
private I2PSession _session;
/** object that receives our i2cp notifications from the session and tells us */
private I2PListener _i2pListener; /* UNUSED */
/**
* This config property tells us where the private destination data for our
* connection lives (or, if it doesn't exist yet, where we will save it)
*/
private static final String DEST_FILE_PROP = "privateDestinationFile";
/** by default, the private destination data is in "heartbeat.keys" */
private static final String DEST_FILE_DEFAULT = "heartbeat.keys";
/** where will we export the public destination in base 64? */
private static final String PUBLIC_DEST_FILE_PROP = "publicDestinationFile";
/** where will we export the public destination in base 64? */
private static final String PUBLIC_DEST_FILE_DEFAULT = "heartbeat.txt";
/** This config property defines where the I2P router is */
private static final String I2CP_HOST_PROP = "i2cpHost";
/** by default, the I2P host is "localhost" */
private static final String I2CP_HOST_DEFAULT = "localhost";
/** This config property defines the I2CP port on the router */
private static final String I2CP_PORT_PROP = "i2cpPort";
/** by default, the I2CP port is 7654 */
private static final int I2CP_PORT_DEFAULT = 7654;
/** This property defines how many hops we want in our tunnels. */
public static final String NUMHOPS_PROP = "numHops";
/** by default, use 2 hop tunnels */
public static final int NUMHOPS_DEFAULT = 2;
/**
* Constructs an I2PAdapter . . .
*/
public I2PAdapter() {
_privateDestFile = null;
_publicDestFile = null;
_i2cpHost = null;
_i2cpPort = -1;
_localDest = null;
_listener = null;
_session = null;
_numHops = 0;
}
/**
* who are we?
* @return the destination (us)
*/
public Destination getLocalDestination() {
return _localDest;
}
/**
* who gets notified when we receive a ping or a pong?
* @return the event listener who gets notified
*/
public PingPongEventListener getListener() {
return _listener;
}
/**
* Sets who gets notified when we receive a ping or a pong
* @param listener the event listener to get notified
*/
public void setListener(PingPongEventListener listener) {
_listener = listener;
}
/**
* how many hops do we want in our tunnels?
* @return the number of hops
*/
public int getNumHops() {
return _numHops;
}
/**
* are we connected?
* @return true or false . . .
*/
public boolean getIsConnected() {
return _session != null;
}
/**
* Read in all of the config data
* @param props the properties to load from
*/
void loadConfig(Properties props) {
String privDestFile = props.getProperty(DEST_FILE_PROP, DEST_FILE_DEFAULT);
String pubDestFile = props.getProperty(PUBLIC_DEST_FILE_PROP, PUBLIC_DEST_FILE_DEFAULT);
String host = props.getProperty(I2CP_HOST_PROP, I2CP_HOST_DEFAULT);
String port = props.getProperty(I2CP_PORT_PROP, "" + I2CP_PORT_DEFAULT);
String numHops = props.getProperty(NUMHOPS_PROP, "" + NUMHOPS_DEFAULT);
int portNum = -1;
try {
portNum = Integer.parseInt(port);
} catch (NumberFormatException nfe) {
if (_log.shouldLog(Log.WARN)) {
_log.warn("Invalid I2CP port specified [" + port + "]");
}
portNum = I2CP_PORT_DEFAULT;
}
int hops = -1;
try {
hops = Integer.parseInt(numHops);
} catch (NumberFormatException nfe) {
if (_log.shouldLog(Log.WARN)) {
_log.warn("Invalid # hops specified [" + numHops + "]");
}
hops = NUMHOPS_DEFAULT;
}
_numHops = hops;
_privateDestFile = privDestFile;
_publicDestFile = pubDestFile;
_i2cpHost = host;
_i2cpPort = portNum;
}
/**
* write out the config to the props
* @param props the properties to write to
*/
void storeConfig(Properties props) {
if (_privateDestFile != null) {
props.setProperty(DEST_FILE_PROP, _privateDestFile);
} else {
props.setProperty(DEST_FILE_PROP, DEST_FILE_DEFAULT);
}
if (_publicDestFile != null) {
props.setProperty(PUBLIC_DEST_FILE_PROP, _publicDestFile);
} else {
props.setProperty(PUBLIC_DEST_FILE_PROP, PUBLIC_DEST_FILE_DEFAULT);
}
if (_i2cpHost != null) {
props.setProperty(I2CP_HOST_PROP, _i2cpHost);
} else {
props.setProperty(I2CP_HOST_PROP, I2CP_HOST_DEFAULT);
}
if (_i2cpPort > 0) {
props.setProperty(I2CP_PORT_PROP, "" + _i2cpPort);
} else {
props.setProperty(I2CP_PORT_PROP, "" + I2CP_PORT_DEFAULT);
}
props.setProperty(NUMHOPS_PROP, "" + _numHops);
}
private static final int TYPE_PING = 0;
private static final int TYPE_PONG = 1;
/**
* send a ping message to the peer
*
* @param peer peer to ping
* @param seriesNum id used to keep track of multiple pings (of different size/frequency) to a peer
* @param now current time to be sent in the ping (so we can watch for it in the pong)
* @param size total message size to send
*
* @throws IllegalStateException if we are not connected to the router
*/
public void sendPing(Destination peer, int seriesNum, long now, int size) {
if (_session == null) throw new IllegalStateException("Not connected to the router");
ByteArrayOutputStream baos = new ByteArrayOutputStream(size);
try {
_localDest.writeBytes(baos);
DataHelper.writeLong(baos, 2, seriesNum);
DataHelper.writeLong(baos, 1, TYPE_PING);
DataHelper.writeDate(baos, new Date(now));
int padding = size - baos.size();
byte paddingData[] = new byte[padding];
I2PAppContext.getGlobalContext().random().nextBytes(paddingData);
//Arrays.fill(paddingData, (byte) 0x2A);
DataHelper.writeLong(baos, 2, padding);
baos.write(paddingData);
boolean sent = _session.sendMessage(peer, baos.toByteArray());
if (!sent) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error sending the ping to " + peer.calculateHash().toBase64() + " for series "
+ seriesNum);
}
} else {
if (_log.shouldLog(Log.INFO)) {
_log.info("Ping sent to " + peer.calculateHash().toBase64() + " for series " + seriesNum);
}
}
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error sending the ping", ioe);
}
} catch (DataFormatException dfe) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error writing out the ping message", dfe);
}
} catch (I2PSessionException ise) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error writing out the ping message", ise);
}
}
}
/**
* send a pong message to the peer
*
* @param peer peer to pong
* @param seriesNum id given to us in the ping
* @param sentOn date the peer said they sent us the message
* @param data payload the peer sent us in the ping
*
* @throws IllegalStateException if we are not connected to the router
*/
public void sendPong(Destination peer, int seriesNum, Date sentOn, byte data[]) {
if (_session == null) throw new IllegalStateException("Not connected to the router");
ByteArrayOutputStream baos = new ByteArrayOutputStream(data.length + 768);
try {
_localDest.writeBytes(baos);
DataHelper.writeLong(baos, 2, seriesNum);
DataHelper.writeLong(baos, 1, TYPE_PONG);
DataHelper.writeDate(baos, sentOn);
DataHelper.writeDate(baos, new Date(Clock.getInstance().now()));
DataHelper.writeLong(baos, 2, data.length);
baos.write(data);
boolean sent = _session.sendMessage(peer, baos.toByteArray());
if (!sent) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error sending the pong to " + peer.calculateHash().toBase64() + " for series "
+ seriesNum + " which was sent on " + sentOn);
}
} else {
if (_log.shouldLog(Log.INFO)) {
_log.info("Pong sent to " + peer.calculateHash().toBase64() + " for series " + seriesNum
+ " which was sent on " + sentOn);
}
}
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error sending the ping", ioe);
}
} catch (DataFormatException dfe) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error writing out the pong message", dfe);
}
} catch (I2PSessionException ise) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error writing out the pong message", ise);
}
}
}
/**
* We've received this data from I2P - parse it into a ping or a pong
* and notify accordingly
* @param data the data to handle
*/
private void handleMessage(byte data[]) {
ByteArrayInputStream bais = new ByteArrayInputStream(data);
try {
Destination from = new Destination();
from.readBytes(bais);
int series = (int) DataHelper.readLong(bais, 2);
long type = DataHelper.readLong(bais, 1);
Date sentOn = DataHelper.readDate(bais);
Date receivedOn = null;
if (type == TYPE_PONG) {
receivedOn = DataHelper.readDate(bais);
}
int size = (int) DataHelper.readLong(bais, 2);
byte payload[] = new byte[size];
int read = DataHelper.read(bais, payload);
if (read != size) { throw new IOException("Malformed payload - read " + read + " instead of " + size); }
if (_listener == null) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Listener isn't set, but we received a valid message of type " + type + " sent from "
+ from.calculateHash().toBase64());
}
return;
}
if (type == TYPE_PING) {
if (_log.shouldLog(Log.INFO)) {
_log.info("Ping received from " + from.calculateHash().toBase64() + " on series " + series
+ " sent on " + sentOn + " containing " + size + " bytes");
}
_listener.receivePing(from, series, sentOn, payload);
} else if (type == TYPE_PONG) {
if (_log.shouldLog(Log.INFO)) {
_log.info("Pong received from " + from.calculateHash().toBase64() + " on series " + series
+ " sent on " + sentOn + " with pong sent on " + receivedOn + " containing " + size
+ " bytes");
}
_listener.receivePong(from, series, sentOn, receivedOn, payload);
} else {
throw new IOException("Invalid message type " + type);
}
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error handling the message", ioe);
}
} catch (DataFormatException dfe) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error parsing the message", dfe);
}
}
}
/**
* connect to the I2P router and either authenticate ourselves with the
* destination we're given, or create a new one and write that to the
* destination file.
*
* @return true if we connect successfully, false otherwise
*/
boolean connect() {
I2PClient client = I2PClientFactory.createClient();
Destination us = null;
File destFile = new File(_privateDestFile);
us = verifyDestination(client, destFile);
if (us == null) return false;
// if we're here, we got a destination. lets connect
FileInputStream fin = null;
try {
fin = new FileInputStream(destFile);
Properties options = getOptions();
I2PSession session = client.createSession(fin, options);
I2PListener lsnr = new I2PListener();
session.setSessionListener(lsnr);
session.connect();
_localDest = session.getMyDestination();
if (_log.shouldLog(Log.INFO)) {
_log.info("I2CP Session created and connected as " + _localDest.calculateHash().toBase64());
}
_session = session;
_i2pListener = lsnr;
} catch (I2PSessionException ise) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error connecting", ise);
}
return false;
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error loading the destionation", ioe);
}
return false;
} finally {
if (fin != null) try {
fin.close();
} catch (IOException ioe) {
}
}
return true;
}
/**
* load, verify, or create a destination
*
* @param client the client
* @param destFile the file holding the destination
* @return the destination loaded, or null if there was an error
*/
private Destination verifyDestination(I2PClient client, File destFile) {
Destination us = null;
FileInputStream fin = null;
if (destFile.exists()) {
try {
fin = new FileInputStream(destFile);
us = new Destination();
us.readBytes(fin);
if (_log.shouldLog(Log.INFO)) {
_log.info("Existing destination loaded: [" + us.toBase64() + "]");
}
FileOutputStream fos = null;
try {
fos = new FileOutputStream(_publicDestFile);
fos.write(us.toBase64().getBytes());
fos.flush();
} catch (IOException fioe) {
_log.error("Error writing out the plain destination to [" + _publicDestFile + "]", fioe);
} finally {
if (fos != null) try { fos.close(); } catch (IOException fioe) {}
}
} catch (IOException ioe) {
if (fin != null) try {
fin.close();
} catch (IOException ioe2) {
}
fin = null;
destFile.delete();
us = null;
} catch (DataFormatException dfe) {
if (fin != null) try {
fin.close();
} catch (IOException ioe2) {
}
fin = null;
destFile.delete();
us = null;
} finally {
if (fin != null) try {
fin.close();
} catch (IOException ioe2) {
}
fin = null;
}
}
if (us == null) {
// need to create a new one
FileOutputStream fos = null;
try {
fos = new FileOutputStream(destFile);
us = client.createDestination(fos);
if (_log.shouldLog(Log.INFO)) {
_log.info("New destination created: [" + us.toBase64() + "]");
}
fos.close();
try {
fos = new FileOutputStream(_publicDestFile);
fos.write(us.toBase64().getBytes());
fos.flush();
} catch (IOException fioe) {
_log.error("Error writing out the plain destination to [" + _publicDestFile + "]", fioe);
} finally {
if (fos != null) try { fos.close(); } catch (IOException fioe) {}
fos = null;
}
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error writing out the destination keys being created", ioe);
}
return null;
} catch (I2PException ie) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error creating the destination", ie);
}
return null;
} finally {
if (fos != null) try {
fos.close();
} catch (IOException ioe) {
}
}
}
return us;
}
/**
* I2PSession connect options
* @return the options as Properties
*/
private Properties getOptions() {
Properties props = new Properties();
// this should be BEST_EFFORT, but i'm too lazy to update the code to handle tracking
// sessionTags and sessionKeys, marking them as delivered on pong.
props.setProperty(I2PClient.PROP_RELIABILITY, I2PClient.PROP_RELIABILITY_GUARANTEED);
props.setProperty(I2PClient.PROP_TCP_HOST, _i2cpHost);
props.setProperty(I2PClient.PROP_TCP_PORT, _i2cpPort + "");
props.setProperty("tunnels.depthInbound", "" + _numHops);
props.setProperty("tunnels.depthOutbound", "" + _numHops);
return props;
}
/** disconnect from the I2P router */
void disconnect() {
if (_session != null) {
try {
_session.destroySession();
} catch (I2PSessionException ise) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Error destroying the session", ise);
}
}
_session = null;
}
}
/**
* Defines an event notification system for receiving pings and pongs
*
*/
public interface PingPongEventListener {
/**
* receive a ping message from the peer
*
* @param from peer that sent us the ping
* @param seriesNum id the peer sent us in the ping
* @param sentOn date the peer said they sent us the message
* @param data payload from the ping
*/
void receivePing(Destination from, int seriesNum, Date sentOn, byte data[]);
/**
* receive a pong message from the peer
*
* @param from peer that sent us the pong
* @param seriesNum id the peer sent us in the pong (that we sent them in the ping)
* @param sentOn when we sent out the ping
* @param replyOn when they sent out the pong
* @param data payload from the ping/pong
*/
void receivePong(Destination from, int seriesNum, Date sentOn, Date replyOn, byte data[]);
}
/**
* Receive data from the session and pass it along to handleMessage for parsing/dispersal
*
*/
private class I2PListener implements I2PSessionListener {
/* (non-Javadoc)
* @see net.i2p.client.I2PSessionListener#disconnected(net.i2p.client.I2PSession)
*/
public void disconnected(I2PSession session) {
if (_log.shouldLog(Log.ERROR)) {
_log.error("Session disconnected");
}
disconnect();
}
/* (non-Javadoc)
* @see net.i2p.client.I2PSessionListener#errorOccurred(net.i2p.client.I2PSession, java.lang.String, java.lang.Throwable)
*/
public void errorOccurred(I2PSession session, String message, Throwable error) {
if (_log.shouldLog(Log.ERROR)) _log.error("Error occurred: " + message, error);
}
/* (non-Javadoc)
* @see net.i2p.client.I2PSessionListener#reportAbuse(net.i2p.client.I2PSession, int)
*/
public void reportAbuse(I2PSession session, int severity) {
if (_log.shouldLog(Log.ERROR)) _log.error("Abuse reported with severity " + String.valueOf(severity));
}
/* (non-Javadoc)
* @see net.i2p.client.I2PSessionListener#messageAvailable(net.i2p.client.I2PSession, int, long)
*/
public void messageAvailable(I2PSession session, int msgId, long size) {
try {
byte data[] = session.receiveMessage(msgId);
handleMessage(data);
} catch (I2PSessionException ise) {
if (_log.shouldLog(Log.ERROR)) _log.error("Error receiving the message", ise);
disconnect();
}
}
}
}

View File

@ -1,412 +0,0 @@
package net.i2p.heartbeat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import net.i2p.stat.Rate;
import net.i2p.stat.RateStat;
import net.i2p.util.Clock;
import net.i2p.util.Log;
/**
* Contain the current window of data for a particular series of ping/pong stats
* sent to a peer. This should be periodically kept clean by calling cleanup()
* to timeout expired pings and to drop data outside the window.
*
*/
public class PeerData {
private final static Log _log = new Log(PeerData.class);
/** peer / sequence / config in this data series */
private ClientConfig _peer;
/** date sent (Long) to EventDataPoint containing the datapoints sent in the current period */
private Map _dataPoints;
/** date sent (Long) to EventDataPoint containing pings that haven't yet timed out or been ponged */
private TreeMap _pendingPings;
private long _sessionStart;
private long _lifetimeSent;
private long _lifetimeReceived;
/** rate averaging the time to send over a variety of periods */
private RateStat _sendRate;
/** rate averaging the time to receive over a variety of periods */
private RateStat _receiveRate;
/** rate averaging the frequency of lost messages over a variety of periods */
private RateStat _lostRate;
/** how long we wait before timing out pending pings (60 seconds) */
private static final long TIMEOUT_PERIOD = 60 * 1000;
/** synchronize on this when updating _dataPoints or _pendingPings */
private Object _updateLock = new Object();
/**
* Creates a PeerData . . .
* @param config configuration to load from
*/
public PeerData(ClientConfig config) {
_peer = config;
_dataPoints = new TreeMap();
_pendingPings = new TreeMap();
_sessionStart = Clock.getInstance().now();
_lifetimeSent = 0;
_lifetimeReceived = 0;
_sendRate = new RateStat("sendRate", "How long it takes to send", "peer",
getPeriods(config.getAveragePeriods()));
_receiveRate = new RateStat("receiveRate", "How long it takes to receive", "peer",
getPeriods(config.getAveragePeriods()));
_lostRate = new RateStat("lostRate", "How frequently we lose messages", "peer",
getPeriods(config.getAveragePeriods()));
}
/**
* turn the periods (# minutes) into rate periods (# milliseconds)
* @param periods (in minutes)
* @return an array of periods (in milliseconds)
*/
private static long[] getPeriods(int periods[]) {
long rv[] = null;
if (periods == null) periods = new int[0];
rv = new long[periods.length];
for (int i = 0; i < periods.length; i++)
rv[i] = (long) periods[i] * 60 * 1000; // they're in minutes
Arrays.sort(rv);
return rv;
}
/**
* how many pings are still outstanding?
* @return the number of pings outstanding
*/
public int getPendingCount() {
synchronized (_updateLock) {
return _pendingPings.size();
}
}
/**
* how many data points are available in the current window?
* @return the number of datapoints available
*/
public int getDataPointCount() {
synchronized (_updateLock) {
return _dataPoints.size();
}
}
/**
* when did this test begin?
* @return when the test began
*/
public long getSessionStart() { return _sessionStart; }
/**
* sets when the test began
* @param when when it began
*/
public void setSessionStart(long when) { _sessionStart = when; }
/**
* how many pings have we sent for this test?
* @return the number of pings sent
*/
public long getLifetimeSent() { return _lifetimeSent; }
/**
* how many pongs have we received for this test?
* @return the number of pongs received
*/
public long getLifetimeReceived() { return _lifetimeReceived; }
/**
* @return the client configuration
*/
public ClientConfig getConfig() {
return _peer;
}
/**
* What periods are we averaging the data over (in minutes)?
* @return the periods as an array of ints (in minutes)
*/
public int[] getAveragePeriods() {
return (_peer.getAveragePeriods() != null ? _peer.getAveragePeriods() : new int[0]);
}
/**
* average time to send over the given period.
*
* @param period number of minutes to retrieve the average for
* @return milliseconds average, or -1 if we dont track that period
*/
public double getAverageSendTime(int period) {
return getAverage(_sendRate, period);
}
/**
* average time to receive over the given period.
*
* @param period number of minutes to retrieve the average for
* @return milliseconds average, or -1 if we dont track that period
*/
public double getAverageReceiveTime(int period) {
return getAverage(_receiveRate, period);
}
/**
* number of lost messages over the given period.
*
* @param period number of minutes to retrieve the average for
* @return number of lost messages in the period, or -1 if we dont track that period
*/
public double getLostMessages(int period) {
Rate rate = _lostRate.getRate(period * 60 * 1000);
if (rate == null) return -1;
return rate.getCurrentTotalValue();
}
private double getAverage(RateStat stat, int period) {
Rate rate = stat.getRate(period * 60 * 1000);
if (rate == null) return -1;
return rate.getAverageValue();
}
/**
* Return an ordered list of data points in the current window (after doing a cleanup)
*
* @return list of EventDataPoint objects
*/
public List getDataPoints() {
cleanup();
synchronized (_updateLock) {
return new ArrayList(_dataPoints.values());
}
}
/**
* We have sent the peer a ping on this series (using the send time as given)
* @param dateSent when the ping was sent
*/
public void addPing(long dateSent) {
EventDataPoint sent = new EventDataPoint(dateSent);
synchronized (_updateLock) {
_pendingPings.put(new Long(dateSent), sent);
}
_lifetimeSent++;
}
/**
* we have received a pong from the peer on this series
*
* @param dateSent when we sent the ping
* @param pongSent when the peer received the ping and sent the pong
*/
public void pongReceived(long dateSent, long pongSent) {
long now = Clock.getInstance().now();
synchronized (_updateLock) {
if (_pendingPings.size() <= 0) {
_log.warn("Pong received (sent at " + dateSent + ", " + (now-dateSent)
+ "ms ago, pong delay " + (pongSent-dateSent) + "ms, pong receive delay "
+ (now-pongSent) + "ms)");
return;
}
Long first = (Long)_pendingPings.firstKey();
EventDataPoint data = (EventDataPoint)_pendingPings.remove(new Long(dateSent));
if (data != null) {
data.setPongReceived(now);
data.setPongSent(pongSent);
data.setWasPonged(true);
locked_addDataPoint(data);
if (dateSent != first.longValue()) {
_log.error("Out of order delivery: received " + dateSent
+ " but the first pending is " + first.longValue()
+ " (delta " + (dateSent - first.longValue()) + ")");
} else {
_log.info("In order delivery for " + dateSent + " in ping "
+ _peer.getComment());
}
} else {
_log.warn("Pong received, but no matching ping? ping sent at = " + dateSent);
return;
}
}
_sendRate.addData(pongSent - dateSent, 0);
_receiveRate.addData(now - pongSent, 0);
_lifetimeReceived++;
}
protected void addDataPoint(EventDataPoint data) {
synchronized (_updateLock) {
locked_addDataPoint(data);
}
}
private void locked_addDataPoint(EventDataPoint data) {
Object val = _dataPoints.put(new Long(data.getPingSent()), data);
if (val != null) {
if (_log.shouldLog(Log.WARN))
_log.warn("Duplicate data point received: " + data);
}
}
/**
* drop all data points outside the window we're watching, and time out all
* pending pings not ponged within TIMEOUT_PERIOD, updating the lost message
* rate and coalescing all of the rates.
*
*/
public void cleanup() {
long dropBefore = Clock.getInstance().now() - _peer.getStatDuration() * 60 * 1000;
long timeoutBefore = Clock.getInstance().now() - TIMEOUT_PERIOD;
long numDropped = 0;
long numTimedOut = 0;
synchronized (_updateLock) {
numDropped = locked_dropExpired(dropBefore);
numTimedOut = locked_timeoutPending(timeoutBefore);
}
_lostRate.addData(numTimedOut, 0);
_receiveRate.coalesceStats();
_sendRate.coalesceStats();
_lostRate.coalesceStats();
if (_log.shouldLog(Log.DEBUG))
_log.debug("Peer data cleaned up " + numTimedOut + " timed out pings and removed " + numDropped
+ " old entries");
}
/**
* Drop all data points that are already too old for us to be interested in
*
* @param when the earliest ping send time we care about
* @return number of data points dropped
*/
private int locked_dropExpired(long when) {
Set toDrop = new HashSet(4);
// drop the failed and really old
for (Iterator iter = _dataPoints.keySet().iterator(); iter.hasNext(); ) {
Long pingTime = (Long)iter.next();
if (pingTime.longValue() < when)
toDrop.add(pingTime);
}
for (Iterator iter = toDrop.iterator(); iter.hasNext(); ) {
_dataPoints.remove(iter.next());
}
return toDrop.size();
}
/**
* timeout and remove all pings that were sent before the given time,
* moving them from the set of pending pings to the set of data points
*
* @param when the earliest ping send time we care about
* @return number of pings timed out
*/
private int locked_timeoutPending(long when) {
Set toDrop = new HashSet(4);
for (Iterator iter = _pendingPings.keySet().iterator(); iter.hasNext(); ) {
Long pingTime = (Long)iter.next();
if (pingTime.longValue() < when) {
toDrop.add(pingTime);
EventDataPoint point = (EventDataPoint)_pendingPings.get(pingTime);
point.setWasPonged(false);
locked_addDataPoint(point);
}
}
for (Iterator iter = toDrop.iterator(); iter.hasNext(); ) {
_pendingPings.remove(iter.next());
}
return toDrop.size();
}
/** actual data point for the peer */
public class EventDataPoint {
private boolean _wasPonged;
private long _pingSent;
private long _pongSent;
private long _pongReceived;
/**
* Creates an EventDataPoint
*/
public EventDataPoint() { this(-1); }
/**
* Creates an EventDataPoint with pingtime associated with it =)
* @param pingSentOn the time a ping was sent
*/
public EventDataPoint(long pingSentOn) {
_wasPonged = false;
_pingSent = pingSentOn;
_pongSent = -1;
_pongReceived = -1;
}
/**
* when did we send this ping?
* @return the time the ping was sent
*/
public long getPingSent() { return _pingSent; }
/**
* sets when we sent this ping
* @param when when we sent the ping
*/
public void setPingSent(long when) { _pingSent = when; }
/**
* when did the peer receive the ping?
* @return the time the ping was received
*/
public long getPongSent() {
return _pongSent;
}
/**
* Set the time the peer received the ping
* @param when the time to set
*/
public void setPongSent(long when) {
_pongSent = when;
}
/**
* when did we receive the peer's pong?
* @return the time we received the pong
*/
public long getPongReceived() {
return _pongReceived;
}
/**
* Set the time the peer's pong was received
* @param when the time to set
*/
public void setPongReceived(long when) {
_pongReceived = when;
}
/**
* did the peer reply in time?
* @return true if we got a reply in time, false otherwise
*/
public boolean getWasPonged() {
return _wasPonged;
}
/**
* Set whether we received the peer's reply in time
* @param pong true or false
*/
public void setWasPonged(boolean pong) {
_wasPonged = pong;
}
}
}
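
A minimal usage sketch for PeerData, assuming a ClientConfig has already been built elsewhere (ClientConfig itself is not part of this hunk): each outbound ping is registered with addPing(), the matching pong is fed back through pongReceived(), and cleanup() is called periodically so unanswered pings roll into the lost-message rate.

import net.i2p.heartbeat.ClientConfig;
import net.i2p.heartbeat.PeerData;
import net.i2p.util.Clock;

public class PeerDataSketch {
    /** Illustrative only: drive one ping/pong cycle through a PeerData window. */
    public static void recordOneCycle(ClientConfig config) {
        PeerData data = new PeerData(config);

        long pingSent = Clock.getInstance().now();
        data.addPing(pingSent);                 // ping goes out, now pending

        // later, when the pong arrives, the peer reports when it sent it
        long pongSent = pingSent + 250;         // assumed 250ms one-way time
        data.pongReceived(pingSent, pongSent);  // moves the ping into the data window

        data.cleanup();                         // expire stale pings, coalesce the rates
        System.out.println("pending=" + data.getPendingCount()
                           + " points=" + data.getDataPointCount()
                           + " lifetimeRecv=" + data.getLifetimeReceived());
    }
}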


@ -1,142 +0,0 @@
package net.i2p.heartbeat;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Iterator;
import java.util.Locale;
import net.i2p.util.Clock;
import net.i2p.util.Log;
/**
* Actually write out the stats for peer test
*
*/
public class PeerDataWriter {
private final static Log _log = new Log(PeerDataWriter.class);
/**
* persist the peer state to the location specified in the peer config
*
* @param data the peer data to persist
* @return true if it was persisted correctly, false on error
*/
public boolean persist(PeerData data) {
String filename = data.getConfig().getStatFile();
File statFile = new File(filename);
FileOutputStream fos = null;
try {
fos = new FileOutputStream(statFile);
persist(data, fos);
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR))
_log.error("Error persisting the peer data for "
+ data.getConfig().getPeer().calculateHash().toBase64(), ioe);
return false;
} finally {
if (fos != null) try {
fos.close();
} catch (IOException ioe) {
}
}
return true;
}
/**
* persists the peer state to the output stream
* @param data the peer data to persist
* @param out where to persist the data
* @return true if it was persisted correctly (as implemented, always true; failures surface as IOException)
* @throws IOException
*/
public boolean persist(PeerData data, OutputStream out) throws IOException {
String header = getHeader(data);
out.write(header.getBytes());
out.write("#action\tstatus\tdate and time sent \tsendMs\treplyMs\troundTrip\n".getBytes());
for (Iterator iter = data.getDataPoints().iterator(); iter.hasNext();) {
PeerData.EventDataPoint point = (PeerData.EventDataPoint) iter.next();
String line = getEvent(point);
out.write(line.getBytes());
}
return true;
}
private String getHeader(PeerData data) {
StringBuffer buf = new StringBuffer(1024);
buf.append("peer \t").append(data.getConfig().getPeer().calculateHash().toBase64()).append('\n');
buf.append("local \t").append(data.getConfig().getUs().calculateHash().toBase64()).append('\n');
buf.append("peerDest \t").append(data.getConfig().getPeer().toBase64()).append('\n');
buf.append("localDest \t").append(data.getConfig().getUs().toBase64()).append('\n');
buf.append("numTunnelHops\t").append(data.getConfig().getNumHops()).append('\n');
buf.append("comment \t").append(data.getConfig().getComment()).append('\n');
buf.append("sendFrequency\t").append(data.getConfig().getSendFrequency()).append('\n');
buf.append("sendSize \t").append(data.getConfig().getSendSize()).append('\n');
buf.append("sessionStart \t").append(getTime(data.getSessionStart())).append('\n');
buf.append("currentTime \t").append(getTime(Clock.getInstance().now())).append('\n');
buf.append("numPending \t").append(data.getPendingCount()).append('\n');
buf.append("lifetimeSent \t").append(data.getLifetimeSent()).append('\n');
buf.append("lifetimeRecv \t").append(data.getLifetimeReceived()).append('\n');
int periods[] = data.getAveragePeriods();
buf.append("#averages\tminutes\tsendMs\trecvMs\tnumLost\troundTrip\n");
for (int i = 0; i < periods.length; i++) {
buf.append("periodAverage\t").append(periods[i]).append('\t');
buf.append(getNum(data.getAverageSendTime(periods[i]))).append('\t');
buf.append(getNum(data.getAverageReceiveTime(periods[i]))).append('\t');
buf.append(getNum(data.getLostMessages(periods[i]))).append('\t');
double rtt = data.getAverageSendTime(periods[i])
+ data.getAverageReceiveTime(periods[i]);
buf.append(getNum(rtt)).append('\n');
}
return buf.toString();
}
private String getEvent(PeerData.EventDataPoint point) {
StringBuffer buf = new StringBuffer(128);
buf.append("EVENT\t");
if (point.getWasPonged())
buf.append("OK\t");
else
buf.append("LOST\t");
buf.append(getTime(point.getPingSent())).append('\t');
if (point.getWasPonged()) {
buf.append(point.getPongSent() - point.getPingSent()).append('\t');
buf.append(point.getPongReceived() - point.getPongSent()).append('\t');
buf.append(point.getPongReceived() - point.getPingSent()).append('\t');
}
buf.append('\n');
return buf.toString();
}
private final SimpleDateFormat _fmt = new SimpleDateFormat("yyyyMMdd.HH:mm:ss.SSS", Locale.UK);
/**
* Converts a time (long) to text
* @param when the time to convert
* @return the textual representation
*/
public String getTime(long when) {
synchronized (_fmt) {
return _fmt.format(new Date(when));
}
}
private final DecimalFormat _numFmt = new DecimalFormat("#0", new DecimalFormatSymbols(Locale.UK));
/**
* Converts a number (double) to text
* @param val the number to convert
* @return the textual representation
*/
public String getNum(double val) {
synchronized (_numFmt) {
return _numFmt.format(val);
}
}
}
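
PeerDataWriter emits a plain tab-separated report: a block of header lines (peer, local, tunnel and send settings, lifetime counters), one periodAverage line per configured averaging period, then one EVENT line per data point. Here is a short sketch that captures that report in memory, assuming a populated PeerData instance is available; the sample EVENT values in the comment are made up.

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import net.i2p.heartbeat.PeerData;
import net.i2p.heartbeat.PeerDataWriter;

public class PeerDataWriterSketch {
    /** Illustrative only: render the stats for one peer test as text. */
    public static String render(PeerData data) throws IOException {
        PeerDataWriter writer = new PeerDataWriter();
        ByteArrayOutputStream baos = new ByteArrayOutputStream(4096);
        writer.persist(data, baos);   // same call the GUI's text plot pane uses
        // each data point comes out roughly as
        //   EVENT  OK  20040625.21:13:42.124  1843  771  2614
        // (action, status, date and time sent, sendMs, replyMs, roundTrip)
        return new String(baos.toByteArray());
    }
}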


@ -1,108 +0,0 @@
package net.i2p.heartbeat.gui;
import java.awt.BorderLayout;
import java.awt.Color;
import java.util.ArrayList;
import java.util.List;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTabbedPane;
import net.i2p.util.Log;
/**
* Render the control widgets (refresh/load/snapshot and the
* tabbed panel with the plot config data)
*
*/
class HeartbeatControlPane extends JPanel {
private final static Log _log = new Log(HeartbeatControlPane.class);
private HeartbeatMonitorGUI _gui;
private JTabbedPane _configPane;
private final static Color WHITE = new Color(255, 255, 255);
private final static Color LIGHT_BLUE = new Color(180, 180, 255); /* UNUSED */
private final static Color BLACK = new Color(0, 0, 0);
private Color _background = WHITE;
private Color _foreground = BLACK;
/**
* Constructs a control panel onto the gui
* @param gui the gui the panel is associated with
*/
public HeartbeatControlPane(HeartbeatMonitorGUI gui) {
_gui = gui;
initializeComponents();
}
/**
* Adds a test to the panel
* @param config the configuration for the test
*/
public void addTest(PeerPlotConfig config) {
_configPane.addTab(config.getTitle(), null, new JScrollPane(new PeerPlotConfigPane(config, this)), config.getSummary());
_configPane.setBackgroundAt(_configPane.getTabCount()-1, _background);
_configPane.setForegroundAt(_configPane.getTabCount()-1, _foreground);
_gui.pack();
}
/**
* Removes a test from the panel
* @param config the configuration for the test
*/
public void removeTest(PeerPlotConfig config) {
_gui.getMonitor().getState().removeTest(config);
int index = _configPane.indexOfTab(config.getTitle());
if (index >= 0)
_configPane.removeTabAt(index);
}
/**
* Callback: when tests have changed
*/
public void testsUpdated() {
List knownNames = new ArrayList(8);
for (int i = 0; i < _gui.getMonitor().getState().getTestCount(); i++) {
PeerPlotState state = _gui.getMonitor().getState().getTest(i);
String title = state.getPlotConfig().getTitle();
knownNames.add(state.getPlotConfig().getTitle());
if (_configPane.indexOfTab(title) >= 0) {
_log.debug("We already know about [" + title + "]");
} else {
_log.info("The test [" + title + "] is new to us");
PeerPlotConfigPane pane = new PeerPlotConfigPane(state.getPlotConfig(), this);
_configPane.addTab(state.getPlotConfig().getTitle(), null, new JScrollPane(pane), state.getPlotConfig().getSummary());
_configPane.setBackgroundAt(_configPane.getTabCount()-1, _background);
_configPane.setForegroundAt(_configPane.getTabCount()-1, _foreground);
}
}
List toRemove = new ArrayList(4);
for (int i = 0; i < _configPane.getTabCount(); i++) {
if (knownNames.contains(_configPane.getTitleAt(i))) {
// noop
} else {
toRemove.add(_configPane.getTitleAt(i));
}
}
for (int i = 0; i < toRemove.size(); i++) {
String title = (String)toRemove.get(i);
_log.info("Removing test [" + title + "]");
_configPane.removeTabAt(_configPane.indexOfTab(title));
}
}
private void initializeComponents() {
if (_gui != null)
setBackground(_gui.getBackground());
else
setBackground(_background);
setLayout(new BorderLayout());
HeartbeatMonitorCommandBar bar = new HeartbeatMonitorCommandBar(_gui);
bar.setBackground(getBackground());
add(bar, BorderLayout.NORTH);
_configPane = new JTabbedPane(JTabbedPane.LEFT);
_configPane.setBackground(_background);
//add(_configPane, BorderLayout.CENTER);
add(_configPane, BorderLayout.SOUTH);
}
}


@ -1,116 +0,0 @@
package net.i2p.heartbeat.gui;
import net.i2p.util.I2PThread;
import net.i2p.util.Log;
/**
* The HeartbeatMonitor, complete with main()! Act now, and it's only 5 easy
* payments of $19.95 (plus shipping and handling)! You heard me, only _5_
* easy payments of $19.95 (plus shipping and handling)! <p />
*
* (fine print: something about some states in the US requiring the addition
* of sales tax... or something) <p />
*
* (finer print: Satan owns you. Deal with it.) <p />
*
* (even finer print: usage: <code>HeartbeatMonitor [configFilename]</code>)
*/
public class HeartbeatMonitor implements PeerPlotStateFetcher.FetchStateReceptor, PeerPlotConfig.UpdateListener {
private final static Log _log = new Log(HeartbeatMonitor.class);
private HeartbeatMonitorState _state;
private HeartbeatMonitorGUI _gui;
/**
* Delegating constructor.
* @see HeartbeatMonitor#HeartbeatMonitor(String)
*/
public HeartbeatMonitor() { this(null); }
/**
* Creates a HeartbeatMonitor . . .
* @param configFilename the configuration file to read from
*/
public HeartbeatMonitor(String configFilename) {
_state = new HeartbeatMonitorState(configFilename);
_gui = new HeartbeatMonitorGUI(this);
}
/**
* Starts the game rollin'
*/
public void runMonitor() {
loadConfig();
I2PThread t = new I2PThread(new HeartbeatMonitorRunner(this));
t.setName("HeartbeatMonitor");
t.setDaemon(false);
t.start();
_log.debug("Monitor started");
}
/**
* give us all the data/config available
* @return the current state (data/config)
*/
HeartbeatMonitorState getState() {
return _state;
}
/** for all of the peer tests being monitored, refetch the data and rerender */
void refetchData() {
_log.debug("Refetching data");
for (int i = 0; i < _state.getTestCount(); i++)
PeerPlotStateFetcher.fetchPeerPlotState(this, _state.getTest(i));
}
/** (re)load the config defining what peer tests we are monitoring (and how to render) */
void loadConfig() {
//for (int i = 0; i < 10; i++) {
// load("fake" + i);
//}
}
/**
* Loads config data
* @param location the name of the location to load data from
*/
public void load(String location) {
PeerPlotConfig cfg = new PeerPlotConfig(location);
cfg.addListener(this);
PeerPlotState state = new PeerPlotState(cfg);
PeerPlotStateFetcher.fetchPeerPlotState(this, state);
}
/* (non-Javadoc)
* @see PeerPlotStateFetcher.FetchStateReceptor#peerPlotStateFetched
*/
public synchronized void peerPlotStateFetched(PeerPlotState state) {
_state.addTest(state);
_gui.stateUpdated();
}
/**
* store the config defining what peer tests we are monitoring (and how to render)
*/
void storeConfig() {}
/**
* And now, the main function, the one you've all been waiting for! . . .
* @param args da args. Should take 1, which is the location to load config data from
*/
public static void main(String args[]) {
Thread.currentThread().setName("HeartbeatMonitor.main");
if (args.length == 1)
new HeartbeatMonitor(args[0]).runMonitor();
else
new HeartbeatMonitor().runMonitor();
}
/**
* Called when the config is updated
* @param config the updated config
*/
public void configUpdated(PeerPlotConfig config) {
_log.debug("Config updated, revamping the gui");
_gui.stateUpdated();
}
}
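
As the javadoc above notes, the monitor is normally launched as HeartbeatMonitor [configFilename]; heartbeatMonitor.config is the default config file name defined in HeartbeatMonitorState further down. A minimal sketch of starting it programmatically instead of via main() (the filename here is illustrative):

import net.i2p.heartbeat.gui.HeartbeatMonitor;

public class HeartbeatMonitorLauncherSketch {
    public static void main(String args[]) {
        // The Swing GUI is built in the HeartbeatMonitor constructor;
        // runMonitor() calls loadConfig() and starts the refetch thread.
        new HeartbeatMonitor("heartbeatMonitor.config").runMonitor();
    }
}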


@ -1,67 +0,0 @@
package net.i2p.heartbeat.gui;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.ItemEvent;
import java.awt.event.ItemListener;
import javax.swing.DefaultComboBoxModel;
import javax.swing.JButton;
import javax.swing.JComboBox;
import javax.swing.JFileChooser;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;
class HeartbeatMonitorCommandBar extends JPanel {
private HeartbeatMonitorGUI _gui;
private JComboBox _refreshRate;
private JTextField _location;
/**
* Constructs a command bar onto the gui
* @param gui the gui the command bar is associated with
*/
public HeartbeatMonitorCommandBar(HeartbeatMonitorGUI gui) {
_gui = gui;
initializeComponents();
}
private void refreshChanged(ItemEvent evt) {}
private void loadCalled() {
_gui.getMonitor().load(_location.getText());
}
private void browseCalled() {
JFileChooser chooser = new JFileChooser(_location.getText());
chooser.setBackground(_gui.getBackground());
chooser.setMultiSelectionEnabled(false);
int rv = chooser.showDialog(this, "Load");
if (rv == JFileChooser.APPROVE_OPTION)
_gui.getMonitor().load(chooser.getSelectedFile().getAbsolutePath());
}
private void initializeComponents() {
_refreshRate = new JComboBox(new DefaultComboBoxModel(new Object[] {"10 second refresh", "30 second refresh", "1 minute refresh", "5 minute refresh"}));
_refreshRate.addItemListener(new ItemListener() { public void itemStateChanged(ItemEvent evt) { refreshChanged(evt); } });
_refreshRate.setEnabled(false);
_refreshRate.setBackground(_gui.getBackground());
//add(_refreshRate);
JLabel loadLabel = new JLabel("Load from: ");
loadLabel.setBackground(_gui.getBackground());
add(loadLabel);
_location = new JTextField(20);
_location.setToolTipText("Either specify a local filename or a fully qualified URL");
_location.setBackground(_gui.getBackground());
add(_location);
JButton browse = new JButton("Browse...");
browse.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent evt) { browseCalled(); } });
browse.setBackground(_gui.getBackground());
add(browse);
JButton load = new JButton("Load");
load.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent evt) { loadCalled(); } });
load.setBackground(_gui.getBackground());
add(load);
setBackground(_gui.getBackground());
}
}


@ -1,98 +0,0 @@
package net.i2p.heartbeat.gui;
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JFrame;
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.JMenuItem;
import javax.swing.JScrollPane;
class HeartbeatMonitorGUI extends JFrame {
private HeartbeatMonitor _monitor;
private HeartbeatPlotPane _plotPane;
private HeartbeatControlPane _controlPane;
private final static Color WHITE = new Color(255, 255, 255);
private Color _background = WHITE;
/**
* Creates the GUI for all youz who be too shoopid for text based shitz
* @param monitor the monitor the gui operates over
*/
public HeartbeatMonitorGUI(HeartbeatMonitor monitor) {
super("Heartbeat Monitor");
_monitor = monitor;
initializeComponents();
pack();
//setResizable(false);
setVisible(true);
}
HeartbeatMonitor getMonitor() { return _monitor; }
/** build up all our widgets */
private void initializeComponents() {
getContentPane().setLayout(new BorderLayout());
setBackground(_background);
_plotPane = new JFreeChartHeartbeatPlotPane(this); // new HeartbeatPlotPane(this);
_plotPane.setBackground(_background);
//JScrollPane pane = new JScrollPane(_plotPane);
//pane.setBackground(_background);
getContentPane().add(new JScrollPane(_plotPane), BorderLayout.CENTER);
_controlPane = new HeartbeatControlPane(this);
_controlPane.setBackground(_background);
getContentPane().add(_controlPane, BorderLayout.SOUTH);
//JSplitPane pane = new JSplitPane(JSplitPane.VERTICAL_SPLIT, new JScrollPane(_plotPane), new JScrollPane(_controlPane));
//getContentPane().add(pane, BorderLayout.CENTER);
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
initializeMenus();
}
/**
* Callback: when the state of the world changes . . .
*/
public void stateUpdated() {
_controlPane.testsUpdated();
_plotPane.stateUpdated();
}
private void exitCalled() {
_monitor.getState().setWasKilled(true);
setVisible(false);
System.exit(0);
}
private void loadConfigCalled() {}
private void saveConfigCalled() {}
private void loadSnapshotCalled() {}
private void saveSnapshotCalled() {}
private void initializeMenus() {
JMenuBar bar = new JMenuBar();
JMenu fileMenu = new JMenu("File");
JMenuItem loadConfig = new JMenuItem("Load config");
loadConfig.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent evt) { loadConfigCalled(); } });
JMenuItem saveConfig = new JMenuItem("Save config");
saveConfig.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent evt) { saveConfigCalled(); } });
JMenuItem saveSnapshot = new JMenuItem("Save snapshot");
saveSnapshot.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent evt) { saveSnapshotCalled(); } });
JMenuItem loadSnapshot = new JMenuItem("Load snapshot");
loadSnapshot.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent evt) { loadSnapshotCalled(); } });
JMenuItem exit = new JMenuItem("Exit");
exit.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent evt) { exitCalled(); } });
fileMenu.add(loadConfig);
fileMenu.add(saveConfig);
fileMenu.add(loadSnapshot);
fileMenu.add(saveSnapshot);
fileMenu.add(exit);
bar.add(fileMenu);
setJMenuBar(bar);
}
}


@ -1,32 +0,0 @@
package net.i2p.heartbeat.gui;
import net.i2p.util.Log;
/**
* Periodically fire off necessary events (instructing the heartbeat monitor when
* to refetch the data, etc). This is the only active thread in the heartbeat
* monitor (outside the swing/jvm threads)
*/
class HeartbeatMonitorRunner implements Runnable {
private final static Log _log = new Log(HeartbeatMonitorRunner.class);
private HeartbeatMonitor _monitor;
/**
* Creates the thread . . .
* @param monitor the monitor the thread runs over
*/
public HeartbeatMonitorRunner(HeartbeatMonitor monitor) {
_monitor = monitor;
}
/* (non-Javadoc)
* @see java.lang.Runnable#run()
*/
public void run() {
while (!_monitor.getState().getWasKilled()) {
_monitor.refetchData();
try { Thread.sleep(_monitor.getState().getRefreshRateMs()); } catch (InterruptedException ie) {}
}
_log.info("Stopping the heartbeat monitor runner");
}
}


@ -1,129 +0,0 @@
package net.i2p.heartbeat.gui;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
/**
* manage the current state of the GUI - all data points, as well as any
* rendering or configuration options.
*/
class HeartbeatMonitorState {
private String _configFile;
private List _peerPlotState;
private int _currentPeerPlotConfig;
private int _refreshRateMs;
private boolean _killed;
/** by default, refresh every 30 seconds */
private final static int DEFAULT_REFRESH_RATE = 30*1000;
/** where do we load/store config info from? */
private final static String DEFAULT_CONFIG_FILE = "heartbeatMonitor.config";
/**
* A delegating constructor.
* @see HeartbeatMonitorState#HeartbeatMonitorState(String)
*/
public HeartbeatMonitorState() { this(DEFAULT_CONFIG_FILE); }
/**
* Constructs the state, loading from the specified location
* @param configFile the name of the file to load info from
*/
public HeartbeatMonitorState(String configFile) {
_peerPlotState = Collections.synchronizedList(new ArrayList());
_refreshRateMs = DEFAULT_REFRESH_RATE;
_configFile = configFile;
_killed = false;
_currentPeerPlotConfig = 0;
}
/**
* how many tests are we monitoring?
* @return the number of tests
*/
public int getTestCount() { return _peerPlotState.size(); }
/**
* Retrieves the current info of a test for a certain peer . . .
* @param peer the index of the peer test in the list
* @return the test data
*/
public PeerPlotState getTest(int peer) { return (PeerPlotState)_peerPlotState.get(peer); }
/**
* Adds a test . . .
* @param peerState the test (by state) to add . . .
*/
public void addTest(PeerPlotState peerState) {
if (!_peerPlotState.contains(peerState))
_peerPlotState.add(peerState);
}
/**
* Removes a test . . .
* @param peerState the test (by state) to remove . . .
*/
public void removeTest(PeerPlotState peerState) { _peerPlotState.remove(peerState); }
/**
* Removes a test . . .
* @param peerConfig the test (by config) to remove . . .
*/
public void removeTest(PeerPlotConfig peerConfig) {
for (int i = 0; i < getTestCount(); i++) {
PeerPlotState state = getTest(i);
if (state.getPlotConfig() == peerConfig) {
removeTest(state);
return;
}
}
}
/**
* which of the tests are we currently editing/viewing?
* @return the number associated with the test
*/
public int getPeerPlotConfig() { return _currentPeerPlotConfig; }
/**
* Sets the test we are currently editing/viewing
* @param whichTest the number associated with the test
*/
public void setPeerPlotConfig(int whichTest) { _currentPeerPlotConfig = whichTest; }
/**
* how frequently should we update the data?
* @return the current frequency (in milliseconds)
*/
public int getRefreshRateMs() { return _refreshRateMs; }
/**
* Sets how frequently we should update data
* @param ms the frequency (in milliseconds)
*/
public void setRefreshRateMs(int ms) { _refreshRateMs = ms; }
/**
* where is our config stored?
* @return the name of the config file
*/
public String getConfigFile() { return _configFile; }
/**
* Sets where our config is stored
* @param filename the name of the config file
*/
public void setConfigFile(String filename) { _configFile = filename; }
/**
* have we been shut down?
* @return true if we have, false otherwise
*/
public boolean getWasKilled() { return _killed; }
/**
* Sets if we have been shutdown or not
* @param killed true if we've been shutdown, false otherwise
*/
public void setWasKilled(boolean killed) { _killed = killed; }
}
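
HeartbeatMonitorState is plain bookkeeping: a synchronized list of PeerPlotState tests plus the refresh rate that HeartbeatMonitorRunner polls between refetches. A small sketch of that bookkeeping, assuming a stats file location to build the PeerPlotConfig from (PeerPlotState is defined elsewhere in this package; the classes are package-private, so the sketch has to live in net.i2p.heartbeat.gui):

package net.i2p.heartbeat.gui;

class MonitorStateSketch {
    /** Illustrative only: set up the state the runner and GUI share. */
    static HeartbeatMonitorState buildState(String statLocation) {
        HeartbeatMonitorState state = new HeartbeatMonitorState(); // uses the default config file name
        state.setRefreshRateMs(60 * 1000);                         // runner sleeps this long between refetches
        PeerPlotConfig cfg = new PeerPlotConfig(statLocation);
        state.addTest(new PeerPlotState(cfg));                     // duplicate adds are ignored
        return state;
    }
}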


@ -1,62 +0,0 @@
package net.i2p.heartbeat.gui;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.swing.JPanel;
import javax.swing.JTextArea;
import net.i2p.heartbeat.PeerDataWriter;
import net.i2p.util.Log;
/**
* Render the graph and legend
*/
class HeartbeatPlotPane extends JPanel {
private final static Log _log = new Log(HeartbeatPlotPane.class);
protected HeartbeatMonitorGUI _gui;
private JTextArea _text;
/**
* Constructs the plot pane
* @param gui the gui the pane is attached to
*/
public HeartbeatPlotPane(HeartbeatMonitorGUI gui) {
_gui = gui;
initializeComponents();
}
/**
* Callback: when things change . . .
*/
public void stateUpdated() {
StringBuffer buf = new StringBuffer(32*1024);
PeerDataWriter writer = new PeerDataWriter();
for (int i = 0; i < _gui.getMonitor().getState().getTestCount(); i++) {
StaticPeerData data = _gui.getMonitor().getState().getTest(i).getCurrentData();
ByteArrayOutputStream baos = new ByteArrayOutputStream(4096);
try {
writer.persist(data, baos);
} catch (IOException ioe) {
_log.error("wtf, error writing to a byte array?", ioe);
}
buf.append(new String(baos.toByteArray())).append("\n\n\n");
}
_text.setText(buf.toString());
}
protected void initializeComponents() {
setBackground(_gui.getBackground());
//Dimension size = new Dimension(800, 600);
_text = new JTextArea("",30,80); // 16, 60);
_text.setAutoscrolls(true);
_text.setEditable(false);
// _text.setLineWrap(true);
// add(new JScrollPane(_text));
add(_text);
// add(new JScrollPane(_text, JScrollPane.VERTICAL_SCROLLBAR_ALWAYS, JScrollPane.HORIZONTAL_SCROLLBAR_ALWAYS));
// setPreferredSize(size);
}
}


@ -1,233 +0,0 @@
package net.i2p.heartbeat.gui;
import java.awt.Color;
import java.awt.Font;
import java.util.List;
import org.jfree.chart.ChartPanel;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.axis.DateAxis;
import org.jfree.chart.axis.NumberAxis;
import org.jfree.chart.plot.Plot;
import org.jfree.chart.plot.XYPlot;
import org.jfree.chart.renderer.XYItemRenderer;
import org.jfree.chart.renderer.XYLineAndShapeRenderer;
import org.jfree.data.XYSeries;
import org.jfree.data.XYSeriesCollection;
import net.i2p.heartbeat.PeerData;
import net.i2p.util.Log;
class JFreeChartAdapter {
private final static Log _log = new Log(JFreeChartAdapter.class); /* UNUSED */
private final static Color WHITE = new Color(255, 255, 255);
ChartPanel createPanel(HeartbeatMonitorState state) {
ChartPanel panel = new ChartPanel(createChart(state));
panel.setDisplayToolTips(true);
panel.setEnforceFileExtensions(true);
panel.setHorizontalZoom(true);
panel.setVerticalZoom(true);
panel.setMouseZoomable(true, true);
panel.getChart().setBackgroundPaint(WHITE);
return panel;
}
JFreeChart createChart(HeartbeatMonitorState state) {
Plot plot = createPlot(state);
JFreeChart chart = new JFreeChart("I2P Heartbeat performance", Font.getFont("arial"), plot, true);
return chart;
}
void updateChart(ChartPanel panel, HeartbeatMonitorState state) {
XYPlot plot = (XYPlot)panel.getChart().getPlot();
plot.setDataset(getCollection(state));
updateLines(plot, state);
}
private long getFirst(HeartbeatMonitorState state) { /* UNUSED */
long first = -1;
for (int i = 0; i < state.getTestCount(); i++) {
List dataPoints = state.getTest(i).getCurrentData().getDataPoints();
if ( (dataPoints != null) && (dataPoints.size() > 0) ) {
PeerData.EventDataPoint data = (PeerData.EventDataPoint)dataPoints.get(0);
if ( (first < 0) || (first > data.getPingSent()) )
first = data.getPingSent();
}
}
return first;
}
Plot createPlot(HeartbeatMonitorState state) {
XYItemRenderer renderer = new XYLineAndShapeRenderer(); // new XYDotRenderer(); //
XYPlot plot = new XYPlot(getCollection(state), new DateAxis(), new NumberAxis("ms"), renderer);
updateLines(plot, state);
return plot;
}
private void updateLines(XYPlot plot, HeartbeatMonitorState state) {
if (true) return;
if (state == null) return;
for (int i = 0; i < state.getTestCount(); i++) {
PeerPlotConfig config = state.getTest(i).getPlotConfig();
PeerPlotConfig.PlotSeriesConfig curConfig = config.getCurrentSeriesConfig();
XYSeriesCollection col = ((XYSeriesCollection)plot.getDataset());
for (int j = 0; j < col.getSeriesCount(); j++) {
//XYItemRenderer renderer = plot.getRendererForDataset(col.getSeries(j));
XYItemRenderer renderer = plot.getRendererForDataset(col);
if (col.getSeriesName(j).startsWith(config.getTitle() + " send")) {
if (curConfig.getPlotSendTime()) {
//renderer.setPaint(curConfig.getPlotLineColor());
}
}
if (col.getSeriesName(j).startsWith(config.getTitle() + " receive")) {
if (curConfig.getPlotReceiveTime()) {
//renderer.setPaint(curConfig.getPlotLineColor());
}
}
if (col.getSeriesName(j).startsWith(config.getTitle() + " lost")) {
if (curConfig.getPlotLostMessages()) {
//renderer.setPaint(curConfig.getPlotLineColor());
}
}
}
}
}
XYSeriesCollection getCollection(HeartbeatMonitorState state) {
XYSeriesCollection col = new XYSeriesCollection();
if (state != null) {
for (int i = 0; i < state.getTestCount(); i++) {
addTest(col, state.getTest(i));
}
} else {
XYSeries series = new XYSeries("latency", false, false);
series.add(System.currentTimeMillis(), 0);
col.addSeries(series);
}
return col;
}
void addTest(XYSeriesCollection col, PeerPlotState state) {
PeerPlotConfig config = state.getPlotConfig();
PeerPlotConfig.PlotSeriesConfig curConfig = config.getCurrentSeriesConfig();
addLines(col, curConfig, config.getTitle(), state.getCurrentData().getDataPoints());
addAverageLines(col, config, config.getTitle(), state.getCurrentData());
}
/**
* @param col the collection of xy series to add to
* @param config preferences for how to display this test
* @param lineName minimal name of the test (e.g. "jxHa.32KB.60s")
* @param data List of PeerData.EventDataPoint describing all of the events in the test
*/
void addLines(XYSeriesCollection col, PeerPlotConfig.PlotSeriesConfig config, String lineName, List data) {
if (config.getPlotSendTime()) {
XYSeries sendSeries = getSendSeries(data);
sendSeries.setName(lineName + " send");
sendSeries.setDescription("milliseconds for the ping to reach the peer");
col.addSeries(sendSeries);
}
if (config.getPlotReceiveTime()) {
XYSeries recvSeries = getReceiveSeries(data);
recvSeries.setName(lineName + " receive");
recvSeries.setDescription("milliseconds for the peer's pong to reach the sender");
col.addSeries(recvSeries);
}
if (config.getPlotLostMessages()) {
XYSeries lostSeries = getLostSeries(data);
lostSeries.setName(lineName + " lost");
lostSeries.setDescription("number of ping/pong messages lost");
col.addSeries(lostSeries);
}
}
/**
* Add a data series for each average that we're configured to render
*
* @param col the collection of xy series to add to
* @param config preferences for how to display this test
* @param lineName minimal name of the test (e.g. "jxHa.32KB.60s")
* @param data List of PeerData.EventDataPoint describing all of the events in the test
*/
void addAverageLines(XYSeriesCollection col, PeerPlotConfig config, String lineName, StaticPeerData data) {
if (data.getDataPointCount() <= 0) return;
PeerData.EventDataPoint start = (PeerData.EventDataPoint)data.getDataPoints().get(0);
PeerData.EventDataPoint finish = (PeerData.EventDataPoint)data.getDataPoints().get(data.getDataPointCount()-1);
List configs = config.getAverageSeriesConfigs();
for (int i = 0; i < configs.size(); i++) {
PeerPlotConfig.PlotSeriesConfig cfg = (PeerPlotConfig.PlotSeriesConfig)configs.get(i);
int minutes = (int)cfg.getPeriod()/(60*1000);
if (cfg.getPlotSendTime()) {
double time = data.getAverageSendTime(minutes);
if (time > 0) {
XYSeries series = new XYSeries(lineName + " send " + minutes + "m avg [" + time + "]", false, false);
series.add(start.getPingSent(), time);
series.add(finish.getPingSent(), time);
series.setDescription("send time, averaged over the last " + minutes + " minutes");
col.addSeries(series);
}
}
if (cfg.getPlotReceiveTime()) {
double time = data.getAverageReceiveTime(minutes);
if (time > 0) {
XYSeries series = new XYSeries(lineName + " receive " + minutes + "m avg[" + time + "]", false, false);
series.add(start.getPingSent(), time);
series.add(finish.getPingSent(), time);
series.setDescription("receive time, averaged over the last " + minutes + " minutes");
col.addSeries(series);
}
}
if (cfg.getPlotLostMessages()) {
double num = data.getLostMessages(minutes);
if (num > 0) {
XYSeries series = new XYSeries(lineName + " lost messages (" + num + " in " + minutes + "m)", false, false);
series.add(start.getPingSent(), num);
series.add(finish.getPingSent(), num);
series.setDescription("number of messages lost in the last " + minutes + " minutes");
col.addSeries(series);
}
}
}
}
XYSeries getSendSeries(List data) {
XYSeries series = new XYSeries("sent", false, false);
for (int i = 0; i < data.size(); i++) {
PeerData.EventDataPoint point = (PeerData.EventDataPoint)data.get(i);
if (point.getWasPonged()) {
series.add(point.getPingSent(), point.getPongSent()-point.getPingSent());
} else {
// series.add(data.getPingSent(), 0);
}
}
return series;
}
XYSeries getReceiveSeries(List data) {
XYSeries series = new XYSeries("receive", false, false);
for (int i = 0; i < data.size(); i++) {
PeerData.EventDataPoint point = (PeerData.EventDataPoint)data.get(i);
if (point.getWasPonged()) {
series.add(point.getPingSent(), point.getPongReceived()-point.getPongSent());
} else {
// series.add(data.getPingSent(), 0);
}
}
return series;
}
XYSeries getLostSeries(List data) {
XYSeries series = new XYSeries("lost", false, false);
for (int i = 0; i < data.size(); i++) {
PeerData.EventDataPoint point = (PeerData.EventDataPoint)data.get(i);
if (point.getWasPonged()) {
//series.add(point.getPingSent(), 0);
} else {
series.add(point.getPingSent(), 1);
}
}
return series;
}
}
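
The adapter's two entry points are createPanel(), which builds the initial ChartPanel from the monitor state, and updateChart(), which pushes a refreshed dataset into an existing panel; that is how JFreeChartHeartbeatPlotPane (the next file below) drives it. A minimal sketch, again in net.i2p.heartbeat.gui because the adapter is package-private:

package net.i2p.heartbeat.gui;

import javax.swing.JFrame;
import javax.swing.JScrollPane;

import org.jfree.chart.ChartPanel;

class ChartSketch {
    /** Illustrative only: show a chart for the current state, then refresh it. */
    static void show(HeartbeatMonitorState state) {
        JFreeChartAdapter adapter = new JFreeChartAdapter();
        ChartPanel panel = adapter.createPanel(state);   // initial chart from the current tests

        JFrame frame = new JFrame("Heartbeat chart sketch");
        frame.getContentPane().add(new JScrollPane(panel));
        frame.pack();
        frame.setVisible(true);

        // after the monitor refetches data, push the new state into the same panel
        adapter.updateChart(panel, state);
    }
}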


@ -1,58 +0,0 @@
package net.i2p.heartbeat.gui;
import java.awt.BorderLayout;
import javax.swing.JLabel;
import javax.swing.JScrollPane;
import org.jfree.chart.ChartPanel;
import net.i2p.util.Log;
/**
* Render the graph and legend
*
*/
class JFreeChartHeartbeatPlotPane extends HeartbeatPlotPane {
private final static Log _log = new Log(JFreeChartHeartbeatPlotPane.class); /* UNUSED */
private ChartPanel _panel;
private JFreeChartAdapter _adapter;
/**
* Creates a JFreeChart plot pane for the given gui
* @param gui the heartbeat monitor gui
*/
public JFreeChartHeartbeatPlotPane(HeartbeatMonitorGUI gui) {
super(gui);
}
/**
* Called when the state is updated
*/
public void stateUpdated() {
if (_panel == null) {
remove(0); // remove the dummy
_adapter = new JFreeChartAdapter();
_panel = _adapter.createPanel(_gui.getMonitor().getState());
_panel.setBackground(_gui.getBackground());
add(new JScrollPane(_panel), BorderLayout.CENTER);
_gui.pack();
} else {
_adapter.updateChart(_panel, _gui.getMonitor().getState());
//_gui.pack();
}
}
protected void initializeComponents() {
// noop
setLayout(new BorderLayout());
add(new JLabel(), BorderLayout.CENTER);
//dummy.setBackground(_gui.getBackground());
//dummy.setPreferredSize(new Dimension(800,600));
//add(dummy);
//add(_panel);
}
}


@ -1,367 +0,0 @@
package net.i2p.heartbeat.gui;
import java.awt.Color;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import java.util.TreeMap;
import net.i2p.data.Destination;
import net.i2p.heartbeat.ClientConfig;
import net.i2p.util.Log;
/**
* Configure how we want to render a particular clientConfig in the GUI
*/
class PeerPlotConfig {
private final static Log _log = new Log(PeerPlotConfig.class); /* UNUSED */
/** where can we find the current state/data (either as a filename or a URL)? */
private String _location;
/** what test are we defining the plot data for? */
private ClientConfig _config;
/** how should we render the current data set? */
private PlotSeriesConfig _currentSeriesConfig;
/** how should we render the various averages available? */
private List _averageSeriesConfigs;
private Set _listeners;
private boolean _disabled;
/**
* Delegating constructor . . .
* @param location the name of the file/URL to get the data from
*/
public PeerPlotConfig(String location) {
this(location, null, null, null);
}
/**
* Constructs a config =)
* @param location the location of the file/URL to get the data from
* @param config the client's configuration
* @param currentSeriesConfig the series config
* @param averageSeriesConfigs the average
*/
public PeerPlotConfig(String location, ClientConfig config, PlotSeriesConfig currentSeriesConfig, List averageSeriesConfigs) {
_location = location;
if (config == null)
config = new ClientConfig(location);
_config = config;
if (currentSeriesConfig != null)
_currentSeriesConfig = currentSeriesConfig;
else
_currentSeriesConfig = new PlotSeriesConfig(0);
if (averageSeriesConfigs != null) {
_averageSeriesConfigs = averageSeriesConfigs;
} else {
rebuildAverageSeriesConfigs();
}
_listeners = Collections.synchronizedSet(new HashSet(2));
_disabled = false;
}
/**
* 'Rebuilds' the average series stuff from the client configuration
*/
public void rebuildAverageSeriesConfigs() {
int periods[] = _config.getAveragePeriods();
if (periods == null) {
_averageSeriesConfigs = Collections.synchronizedList(new ArrayList(0));
} else {
Arrays.sort(periods);
_averageSeriesConfigs = Collections.synchronizedList(new ArrayList(periods.length));
for (int i = 0; i < periods.length; i++) {
_averageSeriesConfigs.add(new PlotSeriesConfig(periods[i]*60*1000));
}
}
}
/**
* Adds an average period
* @param minutes the number of minutes averaged over
*/
public void addAverage(int minutes) {
_config.addAveragePeriod(minutes);
TreeMap ordered = new TreeMap();
for (int i = 0; i < _averageSeriesConfigs.size(); i++) {
PlotSeriesConfig cfg = (PlotSeriesConfig)_averageSeriesConfigs.get(i);
ordered.put(new Long(cfg.getPeriod()), cfg);
}
Long period = new Long(minutes*60*1000);
if (!ordered.containsKey(period))
ordered.put(period, new PlotSeriesConfig(minutes*60*1000));
List cfgs = Collections.synchronizedList(new ArrayList(ordered.size()));
for (Iterator iter = ordered.values().iterator(); iter.hasNext(); )
cfgs.add(iter.next());
_averageSeriesConfigs = cfgs;
}
/**
* Where is the current state data supposed to be found? This must either be a
* local file path or a URL
* @return the current location
*/
public String getLocation() { return _location; }
/**
* The location where the current state data is supposed to be found. This must either be
* a local file path or a URL
* @param location the location
*/
public void setLocation(String location) {
_location = location;
fireUpdate();
}
/**
* What are we configuring?
* @return the client configuration
*/
public ClientConfig getClientConfig() { return _config; }
/**
* Sets what we are currently configuring
* @param config the new config
*/
public void setClientConfig(ClientConfig config) {
_config = config;
fireUpdate();
}
/**
* How do we want to render the current data set?
* @return the way we currently render the data
*/
public PlotSeriesConfig getCurrentSeriesConfig() { return _currentSeriesConfig; }
/**
* Sets how we want to render the current data set.
* @param config the new config
*/
public void setCurrentSeriesConfig(PlotSeriesConfig config) {
_currentSeriesConfig = config;
fireUpdate();
}
/**
* How do we want to render the averages?
* @return the way we currently render the averages
*/
public List getAverageSeriesConfigs() { return _averageSeriesConfigs; }
/**
* Sets how we want to render the averages
* @param configs the new configs
*/
public void setAverageSeriesConfigs(List configs) { _averageSeriesConfigs = configs; }
/**
* four char description of the peer
* @return the name
*/
public String getPeerName() {
Destination peer = getClientConfig().getPeer();
if (peer == null)
return "????";
return peer.calculateHash().toBase64().substring(0, 4);
}
/**
* title: name.packetsize.sendfrequency
* @return the title
*/
public String getTitle() {
return getPeerName() + '.' + getSize() + '.' + getClientConfig().getSendFrequency();
}
/**
* summary. Includes: name, size, send frequency, and # of hops
* @return the summary
*/
public String getSummary() {
return "Send peer " + getPeerName() + ' ' + getSize() + " every " +
getClientConfig().getSendFrequency() + " seconds through " +
getClientConfig().getNumHops() + "-hop tunnels";
}
private String getSize() {
int bytes = getClientConfig().getSendSize();
if (bytes < 1024)
return bytes + "b";
return bytes/1024 + "kb";
}
/**
* we've got someone who wants to be notified of changes to the plot config
* @param lsnr the listener to be added
*/
public void addListener(UpdateListener lsnr) { _listeners.add(lsnr); }
/**
* remove a listener
* @param lsnr the listener to remove
*/
public void removeListener(UpdateListener lsnr) { _listeners.remove(lsnr); }
void fireUpdate() {
if (_disabled) return;
for (Iterator iter = _listeners.iterator(); iter.hasNext(); ) {
((UpdateListener)iter.next()).configUpdated(this);
}
}
/**
* Disables notification of events listeners
* @see PeerPlotConfig#fireUpdate()
*/
public void disableEvents() { _disabled = true; }
/**
* Enables notification of events listeners
* @see PeerPlotConfig#fireUpdate()
*/
public void enableEvents() { _disabled = false; }
/**
* How do we want to render a particular dataset (either the current or the averaged values)?
*/
public class PlotSeriesConfig {
private long _period;
private boolean _plotSendTime;
private boolean _plotReceiveTime;
private boolean _plotLostMessages;
private Color _plotLineColor;
/**
* Delegating constructor . . .
* @param period the period for the config
* (0 for current, otherwise # of milliseconds being averaged over)
*/
public PlotSeriesConfig(long period) {
this(period, false, false, false, null);
if (period <= 0) {
_plotSendTime = true;
_plotReceiveTime = true;
_plotLostMessages = true;
}
}
/**
* Creates a config for the rendering of a particular dataset)
* @param period the period for the config
* (0 for current, otherwise # of milliseconds being averaged over)
* @param plotSend do we plot send times?
* @param plotReceive do we plot receive times?
* @param plotLost do we plot lost packets?
* @param plotColor in what color?
*/
public PlotSeriesConfig(long period, boolean plotSend, boolean plotReceive, boolean plotLost, Color plotColor) {
_period = period;
_plotSendTime = plotSend;
_plotReceiveTime = plotReceive;
_plotLostMessages = plotLost;
_plotLineColor = plotColor;
}
/**
* Retrieves the plot config this plot series config is a part of
* @return the plot config
*/
public PeerPlotConfig getPlotConfig() { return PeerPlotConfig.this; }
/**
* What period is this series config describing?
* @return 0 for current, otherwise # milliseconds that are being averaged over
*/
public long getPeriod() { return _period; }
/**
* Sets the period this series config is describing
* @param period the period
* (0 for current, otherwise # milliseconds that are being averaged over)
*/
public void setPeriod(long period) {
_period = period;
fireUpdate();
}
/**
* Should we render the time to send (ping to peer)?
* @return true or false . . .
*/
public boolean getPlotSendTime() { return _plotSendTime; }
/**
* Sets whether we render the time to send (ping to peer) or not
* @param shouldPlot true or false
*/
public void setPlotSendTime(boolean shouldPlot) {
_plotSendTime = shouldPlot;
fireUpdate();
}
/**
* Should we render the time to receive (peer pong to us)?
* @return true or false . . .
*/
public boolean getPlotReceiveTime() { return _plotReceiveTime; }
/**
* Sets whether we render the time to receive (peer pong to us)
* @param shouldPlot true or false
*/
public void setPlotReceiveTime(boolean shouldPlot) {
_plotReceiveTime = shouldPlot;
fireUpdate();
}
/**
* Should we render the number of messages lost (ping sent, no pong received in time)?
* @return true or false . . .
*/
public boolean getPlotLostMessages() { return _plotLostMessages; }
/**
* Sets whether we render the number of messages lost (ping sent, no pong received in time) or not
* @param shouldPlot true or false
*/
public void setPlotLostMessages(boolean shouldPlot) {
_plotLostMessages = shouldPlot;
fireUpdate();
}
/**
* What color should we plot the data with?
* @return the color
*/
public Color getPlotLineColor() { return _plotLineColor; }
/**
* Sets the color we should plot the data with
* @param color the color to use
*/
public void setPlotLineColor(Color color) {
_plotLineColor = color;
fireUpdate();
}
}
/**
* An interface for listening to updates . . .
*/
public interface UpdateListener {
/**
* @param config the peer plot config that changes
* @see PeerPlotConfig#fireUpdate()
*/
void configUpdated(PeerPlotConfig config);
}
}
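
A short sketch of configuring a plot with the class above: build the config from a stats location, register an UpdateListener so the GUI can redraw on changes, and use disableEvents()/enableEvents() to batch several series tweaks into a single notification (all names come from PeerPlotConfig itself; the location string is illustrative).

package net.i2p.heartbeat.gui;

import java.awt.Color;

class PlotConfigSketch {
    /** Illustrative only: tweak how one test gets rendered. */
    static PeerPlotConfig buildConfig(String statLocation) {
        PeerPlotConfig cfg = new PeerPlotConfig(statLocation);
        cfg.addListener(new PeerPlotConfig.UpdateListener() {
            public void configUpdated(PeerPlotConfig updated) {
                System.out.println("redraw " + updated.getTitle());
            }
        });

        cfg.disableEvents();                            // batch the next few changes
        cfg.addAverage(30);                             // also chart a 30 minute average
        PeerPlotConfig.PlotSeriesConfig current = cfg.getCurrentSeriesConfig();
        current.setPlotLostMessages(false);             // raw data: only send/receive times
        current.setPlotLineColor(new Color(0, 0, 255));
        cfg.enableEvents();
        cfg.fireUpdate();                               // notify listeners once
        return cfg;
    }
}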


@ -1,371 +0,0 @@
package net.i2p.heartbeat.gui;
import java.awt.Color;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.List;
import java.util.Random;
import javax.swing.JButton;
import javax.swing.JCheckBox;
import javax.swing.JColorChooser;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.JTextField;
import net.i2p.util.Log;
class PeerPlotConfigPane extends JPanel implements PeerPlotConfig.UpdateListener {
private final static Log _log = new Log(PeerPlotConfigPane.class);
private PeerPlotConfig _config;
private HeartbeatControlPane _parent;
private JLabel _title;
private JButton _delete;
private JLabel _fromLabel;
private JTextField _from;
private JTextArea _comments;
private JLabel _peerLabel;
private JTextField _peerKey;
private JLabel _localLabel;
private JTextField _localKey;
private OptionLine _options[];
private Random _rnd = new Random();
private final static Color WHITE = new Color(255, 255, 255);
private Color _background = WHITE;
/**
* Constructs a pane
* @param config the plot config it represents
* @param pane the pane this one is attached to
*/
public PeerPlotConfigPane(PeerPlotConfig config, HeartbeatControlPane pane) {
_config = config;
_parent = pane;
if (_parent != null)
_background = _parent.getBackground();
_config.addListener(this);
initializeComponents();
}
/** called when the user wants to stop monitoring this test */
private void delete() {
_parent.removeTest(_config);
}
private void initializeComponents() {
buildComponents();
placeComponents(this);
refreshView();
//setBorder(new BevelBorder(BevelBorder.RAISED));
setBackground(_background);
}
/**
* place all the gui components onto the given panel
* @param body the panel to place the components on
*/
private void placeComponents(JPanel body) {
body.setLayout(new GridBagLayout());
GridBagConstraints cts = new GridBagConstraints();
// row 0: title + delete
cts.gridx = 0;
cts.gridy = 0;
cts.gridwidth = 5;
cts.anchor = GridBagConstraints.WEST;
cts.fill = GridBagConstraints.NONE;
body.add(_title, cts);
cts.gridx = 5;
cts.gridwidth = 1;
cts.anchor = GridBagConstraints.NORTHWEST;
cts.fill = GridBagConstraints.BOTH;
body.add(_delete, cts);
// row 1: from + location
cts.gridx = 0;
cts.gridy = 1;
cts.gridwidth = 1;
cts.fill = GridBagConstraints.NONE;
body.add(_fromLabel, cts);
cts.gridx = 1;
cts.gridwidth = 5;
cts.fill = GridBagConstraints.BOTH;
body.add(_from, cts);
// row 2: comment
cts.gridx = 0;
cts.gridy = 2;
cts.gridwidth = 6;
cts.fill = GridBagConstraints.BOTH;
body.add(_comments, cts);
// row 3: peer + peerKey
cts.gridx = 0;
cts.gridy = 3;
cts.gridwidth = 1;
cts.fill = GridBagConstraints.NONE;
body.add(_peerLabel, cts);
cts.gridx = 1;
cts.gridwidth = 5;
cts.fill = GridBagConstraints.BOTH;
body.add(_peerKey, cts);
// row 4: local + localKey
cts.gridx = 0;
cts.gridy = 4;
cts.gridwidth = 1;
cts.fill = GridBagConstraints.NONE;
body.add(_localLabel, cts);
cts.gridx = 1;
cts.gridwidth = 5;
cts.fill = GridBagConstraints.BOTH;
body.add(_localKey, cts);
// row 5-N: data row
for (int i = 0; i < _options.length; i++) {
cts.gridx = 0;
cts.gridy = 5 + i;
cts.gridwidth = 1;
cts.fill = GridBagConstraints.NONE;
cts.anchor = GridBagConstraints.WEST;
if (_options[i]._durationMinutes <= 0)
body.add(new JLabel("Data: "), cts);
else
body.add(new JLabel(_options[i]._durationMinutes + "m avg: "), cts);
cts.gridx = 1;
body.add(_options[i]._send, cts);
cts.gridx = 2;
body.add(_options[i]._recv, cts);
cts.gridx = 3;
body.add(_options[i]._lost, cts);
cts.gridx = 4;
body.add(_options[i]._all, cts);
cts.gridx = 5;
body.add(_options[i]._color, cts);
}
}
/** build all of the gui components */
private void buildComponents() {
_title = new JLabel(_config.getSummary());
_title.setBackground(_background);
_delete = new JButton("Delete");
_delete.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent evt) { delete(); } });
_delete.setEnabled(false);
_delete.setBackground(_background);
_fromLabel = new JLabel("Location: ");
_fromLabel.setBackground(_background);
_from = new JTextField(_config.getLocation());
_from.setEditable(false);
_from.setBackground(_background);
_comments = new JTextArea(_config.getClientConfig().getComment(), 2, 20);
// _comments = new JTextArea(_config.getClientConfig().getComment(), 2, 40);
_comments.setEditable(false);
_comments.setBackground(_background);
_peerLabel = new JLabel("Peer: ");
_peerLabel.setBackground(_background);
_peerKey = new JTextField(_config.getClientConfig().getPeer().toBase64(), 8);
_peerKey.setBackground(_background);
_localLabel = new JLabel("Local: ");
_localLabel.setBackground(_background);
_localKey = new JTextField(_config.getClientConfig().getUs().toBase64(), 8);
_localKey.setBackground(_background);
int averagedPeriods[] = _config.getClientConfig().getAveragePeriods();
if (averagedPeriods == null)
averagedPeriods = new int[0];
_options = new OptionLine[1 + averagedPeriods.length];
_options[0] = new OptionLine(0);
for (int i = 0; i < averagedPeriods.length; i++) {
_options[1+i] = new OptionLine(averagedPeriods[i]);
}
}
/** the settings have changed - revise */
private void refreshView() {
for (int i = 0; i < _options.length; i++) {
PeerPlotConfig.PlotSeriesConfig cfg = getConfig(_options[i]._durationMinutes);
if (cfg == null) {
_log.warn("Config for minutes " + _options[i]._durationMinutes + " was not found?");
continue;
}
//_log.debug("Refreshing view for minutes ["+ _options[i]._durationMinutes + "]: send [" +
// _options[i]._send.isSelected() + "/" + cfg.getPlotSendTime() + "] recv [" +
// _options[i]._recv.isSelected() + "/" + cfg.getPlotReceiveTime() + "] lost [" +
// _options[i]._lost.isSelected() + "/" + cfg.getPlotLostMessages() + "]");
_options[i]._send.setSelected(cfg.getPlotSendTime());
_options[i]._recv.setSelected(cfg.getPlotReceiveTime());
_options[i]._lost.setSelected(cfg.getPlotLostMessages());
if (cfg.getPlotLineColor() != null)
_options[i]._color.setBackground(cfg.getPlotLineColor());
}
}
/**
* find the right config for the given period
* @param minutes the minutes to locate the config by
* @return the config for the given period, or null
*/
private PeerPlotConfig.PlotSeriesConfig getConfig(int minutes) {
if (minutes <= 0)
return _config.getCurrentSeriesConfig();
List configs = _config.getAverageSeriesConfigs();
for (int i = 0; i < configs.size(); i++) {
PeerPlotConfig.PlotSeriesConfig cfg = (PeerPlotConfig.PlotSeriesConfig)configs.get(i);
if (cfg.getPeriod() == minutes * 60*1000)
return cfg;
}
return null;
}
/**
* notified that the config has been updated
* @param config the config that was updated
*/
public void configUpdated(PeerPlotConfig config) { refreshView(); }
private class ChooseColor implements ActionListener {
private int _minutes;
private JButton _button;
/**
* @param minutes the minutes (line) to change the color of...
* @param button the associated button
*/
public ChooseColor(int minutes, JButton button) {
_minutes = minutes;
_button = button;
}
/* (non-Javadoc)
* @see java.awt.event.ActionListener#actionPerformed(java.awt.event.ActionEvent)
*/
public void actionPerformed(ActionEvent evt) {
PeerPlotConfig.PlotSeriesConfig cfg = getConfig(_minutes);
Color origColor = null;
if (cfg != null)
origColor = cfg.getPlotLineColor();
Color color = JColorChooser.showDialog(PeerPlotConfigPane.this, "What color should this line be?", origColor);
if (color != null) {
if (cfg != null)
cfg.setPlotLineColor(color);
_button.setBackground(color);
}
}
}
private class OptionLine {
int _durationMinutes;
JCheckBox _send;
JCheckBox _recv;
JCheckBox _lost;
JCheckBox _all;
JButton _color;
/**
* Creates an OptionLine.
* @param durationMinutes the averaging period in minutes (0 for the current, non-averaged data)
*/
public OptionLine(int durationMinutes) {
_durationMinutes = durationMinutes;
_send = new JCheckBox("send time");
_send.setBackground(_background);
_recv = new JCheckBox("receive time");
_recv.setBackground(_background);
_lost = new JCheckBox("lost messages");
_lost.setBackground(_background);
_all = new JCheckBox("all");
_all.setBackground(_background);
_color = new JButton("color");
int r = _rnd.nextInt(255);
if (r < 0) r = -r;
int g = _rnd.nextInt(255);
if (g < 0) g = -g;
int b = _rnd.nextInt(255);
if (b < 0) b = -b;
//_color.setBackground(new Color(r, g, b));
_color.setBackground(_background);
_send.addActionListener(new UpdateListener(OptionLine.this, _durationMinutes));
_recv.addActionListener(new UpdateListener(OptionLine.this, _durationMinutes));
_lost.addActionListener(new UpdateListener(OptionLine.this, _durationMinutes));
_all.addActionListener(new UpdateListener(OptionLine.this, _durationMinutes));
_color.addActionListener(new ChooseColor(durationMinutes, _color));
_color.setEnabled(false);
}
}
private class UpdateListener implements ActionListener {
private OptionLine _line;
private int _minutes;
/**
* Creates an UpdateListener for one option line.
* @param line the option line whose checkboxes drive the update
* @param minutes the averaging period that line represents
*/
public UpdateListener(OptionLine line, int minutes) {
_line = line;
_minutes = minutes;
}
/* (non-Javadoc)
* @see java.awt.event.ActionListener#actionPerformed(java.awt.event.ActionEvent)
*/
public void actionPerformed(ActionEvent evt) {
PeerPlotConfig.PlotSeriesConfig cfg = getConfig(_minutes);
if (cfg == null) return; // no series config for this period
cfg.getPlotConfig().disableEvents();
_log.debug("Updating data for minutes ["+ _line._durationMinutes + "]: send [" +
_line._send.isSelected() + "/" + cfg.getPlotSendTime() + "] recv [" +
_line._recv.isSelected() + "/" + cfg.getPlotReceiveTime() + "] lost [" +
_line._lost.isSelected() + "/" + cfg.getPlotLostMessages() + "]: config = " + cfg);
boolean force = _line._all.isSelected();
cfg.setPlotSendTime(_line._send.isSelected() || force);
cfg.setPlotReceiveTime(_line._recv.isSelected() || force);
cfg.setPlotLostMessages(_line._lost.isSelected() || force);
cfg.getPlotConfig().enableEvents();
cfg.getPlotConfig().fireUpdate();
}
}
/**
* Unit test stuff
* @param args the command line arguments (unused)
*/
public final static void main(String args[]) {
Test t = new Test();
t.runTest();
}
private final static class Test implements PeerPlotStateFetcher.FetchStateReceptor {
/**
* Runs the test
*/
public void runTest() {
PeerPlotConfig cfg = new PeerPlotConfig("C:\\testnet\\r2\\heartbeatStat_10s_30kb.txt");
PeerPlotState state = new PeerPlotState(cfg);
PeerPlotStateFetcher.fetchPeerPlotState(this, state);
try { Thread.sleep(60*1000); } catch (InterruptedException ie) {}
System.exit(-1);
}
/* (non-Javadoc)
* @see net.i2p.heartbeat.gui.PeerPlotStateFetcher.FetchStateReceptor#peerPlotStateFetched(net.i2p.heartbeat.gui.PeerPlotState)
*/
public void peerPlotStateFetched(PeerPlotState state) {
javax.swing.JFrame f = new javax.swing.JFrame("Test");
f.getContentPane().add(new JScrollPane(new PeerPlotConfigPane(state.getPlotConfig(), null)));
f.pack();
f.setVisible(true);
}
}
}


@ -1,95 +0,0 @@
package net.i2p.heartbeat.gui;
/**
* Current data + plot config for a particular test
*
*/
class PeerPlotState {
private StaticPeerData _currentData;
private PeerPlotConfig _plotConfig;
/**
* Delegating constructor . . .
* @see PeerPlotState#PeerPlotState(PeerPlotConfig, StaticPeerData)
*/
public PeerPlotState() {
this(null, null);
}
/**
* Delegating constructor . . .
* @param config plot config
* @see PeerPlotState#PeerPlotState(PeerPlotConfig, StaticPeerData)
*/
public PeerPlotState(PeerPlotConfig config) {
this(config, new StaticPeerData(config.getClientConfig()));
}
/**
* Creates a PeerPlotState
* @param config plot config
* @param data peer data
*/
public PeerPlotState(PeerPlotConfig config, StaticPeerData data) {
_plotConfig = config;
_currentData = data;
}
/**
* Add an average
* @param minutes mins averaged over
* @param sendMs how much later did the peer receive the ping
* @param recvMs how much later did we receive the pong
* @param lost how many were lost
*/
public void addAverage(int minutes, int sendMs, int recvMs, int lost) {
// make sure we've got the config entry for the average
_plotConfig.addAverage(minutes);
// add the data point...
_currentData.addAverage(minutes, sendMs, recvMs, lost);
}
/**
* we successfully got a ping/pong through
*
* @param sendTime when did the ping get sent?
* @param sendMs how much later did the peer receive the ping?
* @param recvMs how much later than that did we receive the pong?
*/
public void addSuccess(long sendTime, int sendMs, int recvMs) {
_currentData.addData(sendTime, sendMs, recvMs);
}
/**
* we lost a ping/pong
*
* @param sendTime when did we send the ping?
*/
public void addLost(long sendTime) {
_currentData.addData(sendTime);
}
/**
* data set to render
* @return the data set
*/
public StaticPeerData getCurrentData() { return _currentData; }
/**
* Sets the data set to render
* @param data the data set
*/
public void setCurrentData(StaticPeerData data) { _currentData = data; }
/**
* configuration options on how to render the data set
* @return the config options
*/
public PeerPlotConfig getPlotConfig() { return _plotConfig; }
/**
* Sets the configuration options on how to render the data
* @param config the config options
*/
public void setPlotConfig(PeerPlotConfig config) { _plotConfig = config; }
}


@ -1,363 +0,0 @@
package net.i2p.heartbeat.gui;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.StringTokenizer;
import net.i2p.data.DataFormatException;
import net.i2p.data.Destination;
import net.i2p.util.I2PThread;
import net.i2p.util.Log;
class PeerPlotStateFetcher {
private final static Log _log = new Log(PeerPlotStateFetcher.class);
/**
* Fetch and fill the specified state structure
* @param receptor the 'receptor' (callbacks)
* @param state the state
*/
public static void fetchPeerPlotState(FetchStateReceptor receptor, PeerPlotState state) {
I2PThread t = new I2PThread(new Fetcher(receptor, state));
t.setDaemon(true);
t.setName("Fetch state from " + state.getPlotConfig().getLocation());
t.start();
}
/**
* Callback interface, notified when a peer plot state has been fetched
*/
public interface FetchStateReceptor {
/**
* Called when a peer plot state is fetched
* @param state state that was fetched
*/
void peerPlotStateFetched(PeerPlotState state);
}
private static class Fetcher implements Runnable {
private PeerPlotState _state;
private FetchStateReceptor _receptor;
/**
* Creates a Fetcher thread
* @param receptor the 'receptor' (callbacks)
* @param state the state
*/
public Fetcher(FetchStateReceptor receptor, PeerPlotState state) {
_state = state;
_receptor = receptor;
}
/* (non-Javadoc)
* @see java.lang.Runnable#run()
*/
public void run() {
String loc = _state.getPlotConfig().getLocation();
_log.debug("Load called [" + loc + "]");
InputStream in = null;
try {
try {
URL location = new URL(loc);
in = location.openStream();
} catch (MalformedURLException mue) {
_log.debug("Not a url [" + loc + "]");
in = null;
}
if (in == null)
in = new FileInputStream(loc);
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
String line = null;
while ( (line = reader.readLine()) != null) {
handleLine(line);
}
if (valid())
_receptor.peerPlotStateFetched(_state);
} catch (IOException ioe) {
_log.error("Error retrieving from the location [" + loc + "]", ioe);
} finally {
if (in != null) try { in.close(); } catch (IOException ioe) {}
}
}
/**
* check to make sure we've got everything we need
* @return true [always]
*/
boolean valid() {
return true;
}
/**
* handle a line from the data set - these can be formatted in one of the
* following ways. <p />
*
* <pre>
* peer khWYqCETu9YtPUvGV92ocsbEW5DezhKlIG7ci8RLX3g=
* local u-9hlR1ik2hemXf0HvKMfeRgrS86CbNQh25e7XBhaQE=
* peerDest [base 64 of the full destination]
* localDest [base 64 of the full destination]
* numTunnelHops 2
* comment Test with localhost sending 30KB every 20 seconds
* sendFrequency 20
* sendSize 30720
* sessionStart 20040409.22:51:10.915
* currentTime 20040409.23:31:39.607
* numPending 2
* lifetimeSent 118
* lifetimeRecv 113
* #averages minutes sendMs recvMs numLost
* periodAverage 1 1843 771 0
* periodAverage 5 786 752 1
* periodAverage 30 855 735 3
* #action status date and time sent sendMs replyMs
* EVENT OK 20040409.23:21:44.742 691 670
* EVENT OK 20040409.23:22:05.201 671 581
* EVENT OK 20040409.23:22:26.301 1182 1452
* EVENT OK 20040409.23:22:47.322 24304 1723
* EVENT OK 20040409.23:23:08.232 2293 1081
* EVENT OK 20040409.23:23:29.332 1392 641
* EVENT OK 20040409.23:23:50.262 641 761
* EVENT OK 20040409.23:24:11.102 651 701
* EVENT OK 20040409.23:24:31.401 841 621
* EVENT OK 20040409.23:24:52.061 651 681
* EVENT OK 20040409.23:25:12.480 701 1623
* EVENT OK 20040409.23:25:32.990 1442 1212
* EVENT OK 20040409.23:25:54.230 591 631
* EVENT OK 20040409.23:26:14.620 620 691
* EVENT OK 20040409.23:26:35.199 1793 1432
* EVENT OK 20040409.23:26:56.570 661 641
* EVENT OK 20040409.23:27:17.200 641 660
* EVENT OK 20040409.23:27:38.120 611 921
* EVENT OK 20040409.23:27:58.699 831 621
* EVENT OK 20040409.23:28:19.559 801 661
* EVENT OK 20040409.23:28:40.279 601 611
* EVENT OK 20040409.23:29:00.648 601 621
* EVENT OK 20040409.23:29:21.288 701 661
* EVENT LOST 20040409.23:29:41.828
* EVENT LOST 20040409.23:30:02.327
* EVENT LOST 20040409.23:30:22.656
* EVENT OK 20040409.23:31:24.305 1843 771
* </pre>
*
* @param line (see above)
*/
private void handleLine(String line) {
if (line.startsWith("peerDest"))
handlePeerDest(line);
else if (line.startsWith("localDest"))
handleLocalDest(line);
else if (line.startsWith("numTunnelHops"))
handleNumTunnelHops(line);
else if (line.startsWith("comment"))
handleComment(line);
else if (line.startsWith("sendFrequency"))
handleSendFrequency(line);
else if (line.startsWith("sendSize"))
handleSendSize(line);
else if (line.startsWith("periodAverage"))
handlePeriodAverage(line);
else if (line.startsWith("EVENT"))
handleEvent(line);
else if (line.startsWith("numPending"))
handleNumPending(line);
else if (line.startsWith("sessionStart"))
handleSessionStart(line);
else
_log.debug("Not handled: " + line);
}
private void handlePeerDest(String line) {
StringTokenizer tok = new StringTokenizer(line);
tok.nextToken(); // ignore;
String destKey = tok.nextToken();
try {
Destination d = new Destination();
d.fromBase64(destKey);
_state.getPlotConfig().getClientConfig().setPeer(d);
_log.debug("Setting the peer to " + d.calculateHash().toBase64());
} catch (DataFormatException dfe) {
_log.error("Unable to parse the peerDest line: [" + line + "]", dfe);
}
}
private void handleLocalDest(String line) {
StringTokenizer tok = new StringTokenizer(line);
tok.nextToken(); // ignore;
String destKey = tok.nextToken();
try {
Destination d = new Destination();
d.fromBase64(destKey);
_state.getPlotConfig().getClientConfig().setUs(d);
} catch (DataFormatException dfe) {
_log.error("Unable to parse the localDest line: [" + line + "]", dfe);
}
}
private void handleComment(String line) {
StringTokenizer tok = new StringTokenizer(line);
tok.nextToken(); // ignore;
StringBuffer buf = new StringBuffer(line.length()); // capacity must not go negative for short comment lines
while (tok.hasMoreTokens())
buf.append(tok.nextToken()).append(' ');
_state.getPlotConfig().getClientConfig().setComment(buf.toString());
}
private void handleNumTunnelHops(String line) {
StringTokenizer tok = new StringTokenizer(line);
tok.nextToken(); // ignore;
String num = tok.nextToken();
try {
int val = Integer.parseInt(num);
_state.getPlotConfig().getClientConfig().setNumHops(val);
} catch (NumberFormatException nfe) {
_log.error("Unable to parse the numTunnelHops line: [" + line + "]", nfe);
}
}
private void handleNumPending(String line) {
StringTokenizer tok = new StringTokenizer(line);
tok.nextToken(); // ignore;
String num = tok.nextToken();
try {
int val = Integer.parseInt(num);
_state.getCurrentData().setPendingCount(val);
} catch (NumberFormatException nfe) {
_log.error("Unable to parse the numPending line: [" + line + "]", nfe);
}
}
private void handleSendFrequency(String line) {
StringTokenizer tok = new StringTokenizer(line);
tok.nextToken(); // ignore;
String num = tok.nextToken();
try {
int val = Integer.parseInt(num);
_state.getPlotConfig().getClientConfig().setSendFrequency(val);
} catch (NumberFormatException nfe) {
_log.error("Unable to parse the sendFrequency line: [" + line + "]", nfe);
}
}
private void handleSendSize(String line) {
StringTokenizer tok = new StringTokenizer(line);
tok.nextToken(); // ignore;
String num = tok.nextToken();
try {
int val = Integer.parseInt(num);
_state.getPlotConfig().getClientConfig().setSendSize(val);
} catch (NumberFormatException nfe) {
_log.error("Unable to parse the sendSize line: [" + line + "]", nfe);
}
}
private void handleSessionStart(String line) {
StringTokenizer tok = new StringTokenizer(line);
tok.nextToken(); // ignore;
String date = tok.nextToken();
try {
long when = getDate(date);
_state.getCurrentData().setSessionStart(when);
} catch (NumberFormatException nfe) {
_log.error("Unable to parse the sessionStart line: [" + line + "]", nfe);
}
}
private void handlePeriodAverage(String line) {
StringTokenizer tok = new StringTokenizer(line);
tok.nextToken(); // ignore;
try {
// periodAverage minutes sendMs recvMs numLost
int min = Integer.parseInt(tok.nextToken());
int send = Integer.parseInt(tok.nextToken());
int recv = Integer.parseInt(tok.nextToken());
int lost = Integer.parseInt(tok.nextToken());
_state.addAverage(min, send, recv, lost);
} catch (NumberFormatException nfe) {
_log.error("Unable to parse the sendSize line: [" + line + "]", nfe);
}
}
private void handleEvent(String line) {
StringTokenizer tok = new StringTokenizer(line);
// * EVENT OK 20040409.23:29:21.288 701 661
// * EVENT LOST 20040409.23:29:41.828
tok.nextToken(); // ignore first two
tok.nextToken();
try {
long when = getDate(tok.nextToken());
if (when < 0) {
_log.error("Invalid EVENT line: [" + line + "]");
return;
}
if (tok.hasMoreTokens()) {
int sendMs = Integer.parseInt(tok.nextToken());
int recvMs = Integer.parseInt(tok.nextToken());
_state.addSuccess(when, sendMs, recvMs);
} else {
_state.addLost(when);
}
} catch (NumberFormatException nfe) {
_log.error("Unable to parse the EVENT line: [" + line + "]", nfe);
}
}
private static final SimpleDateFormat _fmt = new SimpleDateFormat("yyyyMMdd.HH:mm:ss.SSS", Locale.UK);
private long getDate(String date) {
synchronized (_fmt) {
try {
return _fmt.parse(date).getTime();
} catch (ParseException pe) {
_log.error("Unable to parse the date [" + date + "]", pe);
return -1;
}
}
}
private void fakeRun() { /* UNUSED */
try {
Destination peer = new Destination();
Destination us = new Destination();
peer.fromBase64("3RPLOkQGlq8anNyNWhjbMyHxpAvUyUJKbiUejI80DnPR59T3blc7-XrBhQ2iPbf-BRAR~v1j34Kpba1eDyhPk2gevsE6ULO1irarJ3~C9WcQH2wAbNiVwfWqbh6onQ~YmkSpGNwGHD6ytwbvTyXeBJ" +
"cS8e6gmfNN-sYLn1aQu8UqWB3D6BmTfLtyS3eqWVk66Nrzmwy8E1Hvq5z~1lukYb~cyiDO1oZHAOLyUQtd9eN16yJY~2SRG8LiscpPMl9nSJUr6fmXMUubW-M7QGFH82Om-735PJUk6WMy1Hi9Vgh4Pxhdl7g" +
"fqGRWioFABdhcypb7p1Ca77p73uabLDFK-SjIYmdj7TwSdbNa6PCmzEvCEW~IZeZmnZC5B6pK30AdmD9vc641wUGce9xTJVfNRupf5L7pSsVIISix6FkKQk-FTW2RsZKLbuMCYMaPzLEx5gzODEqtI6Jf2teM" +
"d5xCz51RPayDJl~lJ-W0IWYfosnjM~KxYaqc4agviBuF5ZWeAAAA");
us.fromBase64("W~JFpqSH8uopylox2V5hMbpcHSsb-dJkSKvdJ1vj~KQcUFJWXFyfbetBAukcGH5S559aK9oslU0qbVoMDlJITVC4OXfXSnVbJBP1IhsK8SvjSYicjmIi2fA~k4HvSh9Wxu~bg8yo~jgfHA8tjYpp" +
"K9QKc56BpkJb~hx0nNGy4Ny9eW~6A5AwAmHvwdt5NqcREYRMjRd63dMGm8BcEe-6FbOyMo3dnIFcETWAe8TCeoMxm~S1n~6Jlinw3ETxv-L6lQkhFFWnC5zyzQ~4JhVxxT3taTMYXg8td4CBGmrS078jcjW63" +
"rlSiQgZBlYfN3iEYmurhuIEV9NXRcmnMrBOQUAoXPpVuRIxJbaQNDL71FO2iv424n4YjKs84suAho34GGQKq7WoL5V5KQgihfcl0f~xne-qP3FtpoPFeyA9x-sA2JWDAsxoZlfvgkiP5eyOn23prT9TJK47HC" +
"VilHSV11uTVaC4Jc5YsjoBCZadWbgQnMCKlZ4jk-bLE1PSWLg7AAAA");
_state.getPlotConfig().getClientConfig().setPeer(peer);
_state.getPlotConfig().getClientConfig().setUs(us);
_state.getPlotConfig().getClientConfig().setNumHops(2);
_state.getPlotConfig().getClientConfig().setComment("we do stuff\nreally nifty stuff. really");
_state.getPlotConfig().getClientConfig().setAveragePeriods(new int[] { 1, 5, 30, 60 });
int rnd = new java.util.Random().nextInt();
if (rnd > 0)
rnd = rnd % 10;
else
rnd = (-rnd) % 10;
_state.getPlotConfig().getClientConfig().setSendFrequency(rnd);
_state.getPlotConfig().getClientConfig().setSendSize(16*1024);
_state.getPlotConfig().getClientConfig().setStatDuration(10);
_state.getPlotConfig().rebuildAverageSeriesConfigs();
_state.setCurrentData(new StaticPeerData(_state.getPlotConfig().getClientConfig()));
_receptor.peerPlotStateFetched(_state);
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
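To make the line format documented for handleLine() above concrete, here is a minimal sketch (not part of the original source; the class name, file location and timestamps are illustrative) of the state-building calls the Fetcher ends up making for a few sample lines:

package net.i2p.heartbeat.gui;

/** Illustrative sketch only: shows how parsed lines map onto PeerPlotState calls. */
class FetcherFormatExample {
    public static void main(String[] args) {
        PeerPlotConfig cfg = new PeerPlotConfig("heartbeatStat.txt"); // hypothetical location
        PeerPlotState state = new PeerPlotState(cfg);

        // "periodAverage 5 786 752 1" -> 5 minute average: 786ms send, 752ms receive, 1 lost
        state.addAverage(5, 786, 752, 1);

        // "EVENT OK 20040409.23:21:44.742 691 670" -> a ping that was ponged
        long sent = System.currentTimeMillis(); // the real Fetcher parses this from the date field
        state.addSuccess(sent, 691, 670);

        // "EVENT LOST 20040409.23:29:41.828" -> a ping that never came back
        state.addLost(sent + 20*1000);
    }
}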


@ -1,134 +0,0 @@
package net.i2p.heartbeat.gui;
import java.util.HashMap;
import java.util.Map;
import net.i2p.heartbeat.ClientConfig;
import net.i2p.heartbeat.PeerData;
/**
* Raw data points for a test
*/
class StaticPeerData extends PeerData {
private int _pending;
/** Integer (period, in minutes) to Integer (milliseconds) for sending a ping */
private Map _averageSendTimes;
/** Integer (period, in minutes) to Integer (milliseconds) for receiving a pong */
private Map _averageReceiveTimes;
/** Integer (period, in minutes) to Integer (num messages) of how many messages were lost on average */
private Map _lostMessages;
/**
* Creates a static peer data set for the specified client config
* @param config the client config
*/
public StaticPeerData(ClientConfig config) {
super(config);
_averageSendTimes = new HashMap(4);
_averageReceiveTimes = new HashMap(4);
_lostMessages = new HashMap(4);
}
/**
* Adds averaged data
* @param minutes the minutes (averaged over)
* @param sendMs the send time (ping) in milliseconds
* @param recvMs the receive time (pong) in milliseconds
* @param lost the number lost
*/
public void addAverage(int minutes, int sendMs, int recvMs, int lost) {
_averageSendTimes.put(new Integer(minutes), new Integer(sendMs));
_averageReceiveTimes.put(new Integer(minutes), new Integer(recvMs));
_lostMessages.put(new Integer(minutes), new Integer(lost));
}
/**
* Sets the number pending
* @param numPending the number pending
*/
public void setPendingCount(int numPending) { _pending = numPending; }
/* (non-Javadoc)
* @see net.i2p.heartbeat.PeerData#setSessionStart(long)
*/
public void setSessionStart(long when) { super.setSessionStart(when); }
/**
* Adds data
* @param sendTime the time it was sent
* @param sendMs the send time (ping) in milliseconds
* @param recvMs the receive time (pong) in milliseconds
*/
public void addData(long sendTime, int sendMs, int recvMs) {
PeerData.EventDataPoint dataPoint = new PeerData.EventDataPoint(sendTime);
dataPoint.setPongSent(sendTime + sendMs);
dataPoint.setPongReceived(sendTime + sendMs + recvMs);
dataPoint.setWasPonged(true);
addDataPoint(dataPoint);
}
/**
* Adds data
* @param sendTime the time it was sent
*/
public void addData(long sendTime) {
PeerData.EventDataPoint dataPoint = new PeerData.EventDataPoint(sendTime);
dataPoint.setWasPonged(false);
addDataPoint(dataPoint);
}
/**
* how many pings are still outstanding?
* @return the number of pings outstanding
*/
public int getPendingCount() { return _pending; }
/**
* average time to send over the given period.
*
* @param period number of minutes to retrieve the average for
* @return milliseconds average, or -1 if we don't track that period
*/
public double getAverageSendTime(int period) {
Integer i = (Integer)_averageSendTimes.get(new Integer(period));
if (i == null)
return -1;
return i.doubleValue();
}
/**
* average time to receive over the given period.
*
* @param period number of minutes to retrieve the average for
* @return milliseconds average, or -1 if we don't track that period
*/
public double getAverageReceiveTime(int period) {
Integer i = (Integer)_averageReceiveTimes.get(new Integer(period));
if (i == null)
return -1;
return i.doubleValue();
}
/**
* number of lost messages over the given period.
*
* @param period number of minutes to retrieve the average for
* @return number of lost messages in the period, or -1 if we don't track that period
*/
public double getLostMessages(int period) {
Integer i = (Integer)_lostMessages.get(new Integer(period));
if (i == null)
return -1;
return i.doubleValue();
}
/* (non-Javadoc)
* @see net.i2p.heartbeat.PeerData#cleanup()
*/
public void cleanup() {}
}


@ -1,11 +0,0 @@
$Id$
the i2p/apps/httptunnel module is the root of the
HTTPTunnel application, and everything within it
is released according to the terms of the I2P
license policy. That means everything contained
within the i2p/apps/httptunnel module is released
under the GPL plus the java exception unless
otherwise marked. Alternate licenses that may be
used include BSD, Cryptix, and MIT, as well as
code granted into the public domain.


@ -1,47 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project basedir="." default="all" name="httptunnel">
<target name="all" depends="clean, build" />
<target name="build" depends="builddep, jar" />
<target name="builddep">
<ant dir="../../ministreaming/java/" target="build" />
<!-- ministreaming will build core -->
</target>
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac
srcdir="./src"
debug="true" deprecation="on" source="1.3" target="1.3"
destdir="./build/obj"
classpath="../../../core/java/build/i2p.jar:../../ministreaming/java/build/mstreaming.jar" />
</target>
<target name="jar" depends="compile">
<jar destfile="./build/httptunnel.jar" basedir="./build/obj" includes="**/*.class">
<manifest>
<attribute name="Main-Class" value="net.i2p.httptunnel.HTTPTunnel" />
<attribute name="Class-Path" value="i2p.jar mstreaming.jar" />
</manifest>
</jar>
</target>
<target name="javadoc">
<mkdir dir="./build" />
<mkdir dir="./build/javadoc" />
<javadoc
sourcepath="./src:../../../core/java/src:../../ministreaming/java/src" destdir="./build/javadoc"
packagenames="*"
use="true"
splitindex="true"
windowtitle="HTTPTunnel" />
</target>
<target name="clean">
<delete dir="./build" />
</target>
<target name="cleandep" depends="clean">
<!-- ministreaming will clean core -->
<ant dir="../../ministreaming/java/" target="distclean" />
</target>
<target name="distclean" depends="clean">
<!-- ministreaming will clean core -->
<ant dir="../../ministreaming/java/" target="distclean" />
</target>
</project>


@ -1,87 +0,0 @@
package net.i2p.httptunnel;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import net.i2p.util.Log;
/**
* Listens on a port for HTTP connections.
*/
public class HTTPListener extends Thread {
private static final Log _log = new Log(HTTPListener.class);
private int port;
private String listenHost;
private SocketManagerProducer smp;
/**
* Constructs an HTTPListener that accepts HTTP connections.
* @param smp the SocketManagerProducer supplying I2P socket managers
* @param port the port to listen on
* @param listenHost the host/interface to listen on
*/
public HTTPListener(SocketManagerProducer smp, int port, String listenHost) {
this.smp = smp;
this.port = port;
this.listenHost = listenHost;
start();
}
/* (non-Javadoc)
* @see java.lang.Thread#run()
*/
public void run() {
try {
InetAddress lh = listenHost == null ? null : InetAddress.getByName(listenHost);
ServerSocket ss = new ServerSocket(port, 0, lh);
while (true) {
Socket s = ss.accept();
new HTTPSocketHandler(this, s);
}
} catch (IOException ex) {
_log.error("Error while accepting connections", ex);
}
}
private boolean proxyUsed = false;
/**
* Query whether this is the first use of the proxy or not
* @return Whether this is the first proxy use, no doubt.
*/
public boolean firstProxyUse() {
if (true) return false; // FIXME: check a config option here
if (proxyUsed) {
return false;
}
proxyUsed = true;
return true;
}
/**
* @return The SocketManagerProducer being used.
*/
public SocketManagerProducer getSMP() {
return smp;
}
/**
* Outputs with HTTP 1.1 flair that a feature isn't implemented.
* @param out The stream the text goes to.
* @deprecated
* @throws IOException
*/
public void handleNotImplemented(OutputStream out) throws IOException {
out.write(("HTTP/1.1 200 Document following\n\n" + "<h1>Feature not implemented</h1>").getBytes("ISO-8859-1"));
out.flush();
}
}

View File

@ -1,62 +0,0 @@
package net.i2p.httptunnel;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import net.i2p.httptunnel.handler.RootHandler;
import net.i2p.util.Log;
/**
* Handles a single HTTP socket connection.
*/
public class HTTPSocketHandler extends Thread {
private static final Log _log = new Log(HTTPSocketHandler.class);
private Socket s;
private HTTPListener httpl;
private RootHandler h;
/**
* A public constructor.
* @param httpl An HTTPListener, to listen for HTTP, no doubt
* @param s A socket.
*/
public HTTPSocketHandler(HTTPListener httpl, Socket s) {
this.httpl = httpl;
this.s = s;
h = RootHandler.getInstance();
start();
}
/* (non-Javadoc)
* @see java.lang.Thread#run()
*/
public void run() {
InputStream in = null;
OutputStream out = null;
try {
in = new BufferedInputStream(s.getInputStream());
out = new BufferedOutputStream(s.getOutputStream());
Request req = new Request(in);
h.handle(req, httpl, out);
} catch (IOException ex) {
_log.error("Error while handling data", ex);
} finally {
try {
if (in != null) in.close();
if (out != null) {
out.flush();
out.close();
}
s.close();
} catch (IOException ex) {
_log.error("IOException in finalizer", ex);
}
}
}
}

View File

@ -1,108 +0,0 @@
/*
* HTTPTunnel
* (c) 2003 - 2004 mihi
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; see the file COPYING. If not, write to
* the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*
* In addition, as a special exception, mihi gives permission to link
* the code of this program with the proprietary Java implementation
* provided by Sun (or other vendors as well), and distribute linked
* combinations including the two. You must obey the GNU General
* Public License in all respects for all of the code used other than
* the proprietary Java implementation. If you modify this file, you
* may extend this exception to your version of the file, but you are
* not obligated to do so. If you do not wish to do so, delete this
* exception statement from your version.
*/
package net.i2p.httptunnel;
import net.i2p.client.streaming.I2PSocketManager;
/**
* HTTPTunnel main class.
*/
public class HTTPTunnel {
/**
* Create a HTTPTunnel instance.
*
* @param initialManagers a list of socket managers to use
* @param maxManagers how many managers to have in the cache
* @param shouldThrowAwayManagers whether to throw away a manager after use
* @param listenPort which port to listen on
*/
public HTTPTunnel(I2PSocketManager[] initialManagers, int maxManagers, boolean shouldThrowAwayManagers,
int listenPort) {
this(initialManagers, maxManagers, shouldThrowAwayManagers, listenPort, "127.0.0.1", 7654);
}
/**
* Create a HTTPTunnel instance.
*
* @param initialManagers a list of socket managers to use
* @param maxManagers how many managers to have in the cache
* @param shouldThrowAwayManagers whether to throw away a manager after use
* @param listenPort which port to listen on
* @param i2cpAddress the I2CP address
* @param i2cpPort the I2CP port
*/
public HTTPTunnel(I2PSocketManager[] initialManagers, int maxManagers, boolean shouldThrowAwayManagers,
int listenPort, String i2cpAddress, int i2cpPort) {
SocketManagerProducer smp = new SocketManagerProducer(initialManagers, maxManagers, shouldThrowAwayManagers,
i2cpAddress, i2cpPort);
new HTTPListener(smp, listenPort, "127.0.0.1");
}
/**
* The main entry point, allowing HTTPTunnel to run as a stand-alone program.
* @param args the command line arguments (see showInfo())
*/
public static void main(String[] args) {
String host = "127.0.0.1";
int port = 7654, max = 1;
boolean throwAwayManagers = false;
if (args.length > 1) {
if (args.length == 4) {
host = args[2];
port = Integer.parseInt(args[3]);
} else if (args.length != 2) {
showInfo();
return;
}
max = Integer.parseInt(args[1]);
} else if (args.length != 1) {
showInfo();
return;
}
if (max == 0) {
max = 1;
} else if (max < 0) {
max = -max;
throwAwayManagers = true;
}
new HTTPTunnel(null, max, throwAwayManagers, Integer.parseInt(args[0]), host, port);
}
private static void showInfo() {
System.out.println("Usage: java HTTPTunnel <listenPort> [<max> " + "[<i2cphost> <i2cpport>]]\n"
+ " <listenPort> port to listen for browsers\n"
+ " <max> max number of SocketMangers in pool, " + "use neg. number\n"
+ " to use each SocketManager only once " + "(default: 1)\n"
+ " <i2cphost> host to connect to the router " + "(default: 127.0.0.1)\n"
+ " <i2cpport> port to connect to the router " + "(default: 7654)");
}
}
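For example, following the argument handling in main() above: "java HTTPTunnel 8000" listens on port 8000 with a single, reused SocketManager and the default router at 127.0.0.1:7654, while "java HTTPTunnel 8000 -3 127.0.0.1 7654" keeps a pool of three managers and, because the count is negative, discards each manager after a single use.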


@ -1,153 +0,0 @@
package net.i2p.httptunnel;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.StringReader;
import net.i2p.util.Log;
/**
* An HTTP request (GET or POST). This will be passed to a handler
* for processing.
*/
public class Request {
private static final Log _log = new Log(Request.class);
// all strings are forced to be ISO-8859-1 encoding
private String method;
private String url;
private String proto;
private String params;
private String postData;
/**
* A constructor, creating a request from an InputStream
* @param in InputStream from which we "read-in" a Request
* @throws IOException
*/
public Request(InputStream in) throws IOException {
BufferedReader br = new BufferedReader(new InputStreamReader(in, "ISO-8859-1"));
String line = br.readLine();
if (line == null) { // no data at all
method = null;
_log.error("Connection but no data");
return;
}
int pos = line.indexOf(" ");
if (pos == -1) {
method = line;
url = "";
_log.error("Malformed HTTP request: " + line);
} else {
method = line.substring(0, pos);
url = line.substring(pos + 1);
}
proto = "";
pos = url.indexOf(" ");
if (pos != -1) {
proto = url.substring(pos); // leading space intended
url = url.substring(0, pos);
}
StringBuffer sb = new StringBuffer(512);
while ((line = br.readLine()) != null) {
if (line.length() == 0) break;
sb.append(line).append("\r\n");
}
params = sb.toString(); // no leading empty line!
sb = new StringBuffer();
// hack for POST requests, ripped from HttpClient
// this won't work for large POSTDATA
// FIXME: do this better, please.
if (!method.equals("GET")) {
while (br.ready()) { // empty the buffer (POST requests)
int i = br.read();
if (i != -1) {
sb.append((char) i);
}
}
postData = sb.toString();
} else {
postData = "";
}
}
/**
* @return the Request serialized as a byte array in ISO-8859-1 encoding, or null if there was no request
* @throws IOException
*/
public byte[] toByteArray() throws IOException {
if (method == null) return null;
return toISO8859_1String().getBytes("ISO-8859-1");
}
private String toISO8859_1String() throws IOException {
if (method == null) return null;
return method + " " + url + proto + "\r\n" + params + "\r\n" + postData;
}
/**
* @return the URL of the request
*/
public String getURL() {
return url;
}
/**
* Sets the URL of the Request
* @param newURL the new URL
*/
public void setURL(String newURL) {
url = newURL;
}
/**
* Retrieves the value of a param.
* @param name The name of the param
* @return The value of the param, or null
*/
public String getParam(String name) {
try {
BufferedReader br = new BufferedReader(new StringReader(params));
String line;
while ((line = br.readLine()) != null) {
if (line.startsWith(name)) { return line.substring(name.length()); }
}
return null;
} catch (IOException ex) {
_log.error("Error getting parameter", ex);
return null;
}
}
/**
* Sets the value of a param.
* @param name the name of the param
* @param value the value to be set
*/
public void setParam(String name, String value) {
try {
StringBuffer sb = new StringBuffer(params.length() + value.length());
BufferedReader br = new BufferedReader(new StringReader(params));
String line;
boolean replaced = false;
while ((line = br.readLine()) != null) {
if (line.startsWith(name)) {
replaced = true;
if (value == null) continue; // kill param
line = name + value;
}
sb.append(line).append("\r\n");
}
if (!replaced && value != null) {
sb.append(name).append(value).append("\r\n");
}
params = sb.toString();
} catch (IOException ex) {
_log.error("Error getting parameter", ex);
}
}
}
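A minimal sketch (not part of the original source; the class name and header values are made up) showing how the Request parser above is driven and how the ": "-terminated param names behave:

package net.i2p.httptunnel;

import java.io.ByteArrayInputStream;
import java.io.IOException;

/** Illustrative sketch only: exercises Request parsing and param access. */
class RequestExample {
    public static void main(String[] args) throws IOException {
        String raw = "GET /index.html HTTP/1.1\r\n"
                     + "Host: www.example.i2p\r\n"
                     + "User-Agent: sketch\r\n"
                     + "\r\n";
        Request req = new Request(new ByteArrayInputStream(raw.getBytes("ISO-8859-1")));
        System.out.println(req.getURL());           // "/index.html"
        System.out.println(req.getParam("Host: ")); // "www.example.i2p" - the param name includes ": "
        req.setParam("Host: ", "replaced.i2p");     // rewrites that header line
        System.out.write(req.toByteArray());        // serialized back out in ISO-8859-1
    }
}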


@ -1,120 +0,0 @@
package net.i2p.httptunnel;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Properties;
import net.i2p.client.streaming.I2PSocketManager;
import net.i2p.client.streaming.I2PSocketManagerFactory;
/**
* Produces SocketManagers in a thread and gives them to those who
* need them.
*/
public class SocketManagerProducer extends Thread {
private ArrayList myManagers = new ArrayList();
private HashMap usedManagers = new HashMap();
private int port;
private String host;
private int maxManagers;
private boolean shouldThrowAwayManagers;
/**
* Public constructor creating a SocketManagerProducer
* @param initialManagers a list of socket managers to use
* @param maxManagers how many managers to have in the cache
* @param shouldThrowAwayManagers whether to throw away a manager after use
* @param host which host to listen on
* @param port which port to listen on
*/
public SocketManagerProducer(I2PSocketManager[] initialManagers, int maxManagers, boolean shouldThrowAwayManagers,
String host, int port) {
if (maxManagers < 1) { throw new IllegalArgumentException("maxManagers < 1"); }
this.host = host;
this.port = port;
this.shouldThrowAwayManagers = shouldThrowAwayManagers;
if (initialManagers != null) {
myManagers.addAll(Arrays.asList(initialManagers));
}
this.maxManagers = maxManagers;
this.shouldThrowAwayManagers = shouldThrowAwayManagers;
setDaemon(true);
start();
}
/**
* Thread producing new SocketManagers.
*/
public void run() {
while (true) {
synchronized (this) {
// unless managers are thrown away after each use ("disposable" mode),
// we most probably need no new managers.
while (!shouldThrowAwayManagers && myManagers.size() == maxManagers) {
myWait();
}
}
// produce a new manager, regardless whether it is needed
// or not. Do not synchronize this part, since it can be
// quite time-consuming.
I2PSocketManager newManager = I2PSocketManagerFactory.createManager(host, port, new Properties());
// when done, check if it is needed.
synchronized (this) {
while (myManagers.size() == maxManagers) {
myWait();
}
myManagers.add(newManager);
notifyAll();
}
}
}
/**
* Get a manager for connecting to a given destination. Each
* destination will always get the same manager.
*
* @param dest the destination to connect to
* @return the SocketManager to use
*/
public synchronized I2PSocketManager getManager(String dest) {
I2PSocketManager result = (I2PSocketManager) usedManagers.get(dest);
if (result == null) {
result = getManager();
usedManagers.put(dest, result);
}
return result;
}
/**
* Get a "new" SocketManager. Depending on the anonymity settings,
* this can be a completely new one or one randomly selected from
* a pool.
*
* @return the SocketManager to use
*/
public synchronized I2PSocketManager getManager() {
while (myManagers.size() == 0) {
myWait(); // no manager here, so wait until one is produced
}
int which = (int) (Math.random() * myManagers.size());
I2PSocketManager result = (I2PSocketManager) myManagers.get(which);
if (shouldThrowAwayManagers) {
myManagers.remove(which);
notifyAll();
}
return result;
}
/**
* Wait until InterruptedException
*/
public void myWait() {
try {
wait();
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
}
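As a quick illustration of the pooling contract above (a sketch only, not from the original source; the class name is made up and an I2P router is assumed to be reachable at 127.0.0.1:7654):

package net.i2p.httptunnel;

import net.i2p.client.streaming.I2PSocketManager;

/** Illustrative sketch only: one producer, per-destination manager pinning. */
class SocketManagerProducerExample {
    public static void main(String[] args) {
        SocketManagerProducer smp =
            new SocketManagerProducer(null, 1, false, "127.0.0.1", 7654);
        I2PSocketManager forSite = smp.getManager("example.i2p"); // same manager every time for this dest
        I2PSocketManager any = smp.getManager();                  // blocks until the producer thread has built one
        System.out.println(forSite + " / " + any);
    }
}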


@ -1,61 +0,0 @@
package net.i2p.httptunnel.filter;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Collection;
import java.util.Iterator;
import net.i2p.util.Log;
/**
* Chain multiple filters. Decorator pattern...
*/
public class ChainFilter implements Filter {
private static final Log _log = new Log(ChainFilter.class);
private Collection filters; // perhaps protected?
/**
* @param filters A collection (list) of filters to chain to
*/
public ChainFilter(Collection filters) {
this.filters = filters;
}
/* (non-Javadoc)
* @see net.i2p.httptunnel.filter.Filter#filter(byte[])
*/
public byte[] filter(byte[] toFilter) {
byte[] buf = toFilter;
for (Iterator it = filters.iterator(); it.hasNext();) {
Filter f = (Filter) it.next();
buf = f.filter(buf);
}
return buf;
}
/* (non-Javadoc)
* @see net.i2p.httptunnel.filter.Filter#finish()
*/
public byte[] finish() {
// push the pending bytes through each remaining filter in turn,
// appending that filter's own finish() output at each step.
try {
byte[] buf = EMPTY;
for (Iterator it = filters.iterator(); it.hasNext();) {
Filter f = (Filter) it.next();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
if (buf.length != 0) {
baos.write(f.filter(buf));
}
baos.write(f.finish());
buf = baos.toByteArray();
}
return buf;
} catch (IOException ex) {
_log.error("Error chaining filters", ex);
return EMPTY;
}
}
}


@ -1,25 +0,0 @@
package net.i2p.httptunnel.filter;
/**
* A generic filtering interface.
*/
public interface Filter {
/**
* An empty byte array.
*/
public static final byte[] EMPTY = new byte[0];
/**
* Filter some data. Not all filtered data need to be returned.
* @param toFilter the bytes that are to be filtered.
* @return the filtered data
*/
public byte[] filter(byte[] toFilter);
/**
* The data stream has finished. Returns any remaining buffered data.
* @return the rest of the data
*/
public byte[] finish();
}


@ -1,21 +0,0 @@
package net.i2p.httptunnel.filter;
/**
* A filter letting everything pass as is.
*/
public class NullFilter implements Filter {
/* (non-Javadoc)
* @see net.i2p.httptunnel.filter.Filter#filter(byte[])
*/
public byte[] filter(byte[] toFilter) {
return toFilter;
}
/* (non-Javadoc)
* @see net.i2p.httptunnel.filter.Filter#finish()
*/
public byte[] finish() {
return EMPTY;
}
}
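The three classes above make up the whole filter package; a minimal sketch (not part of the original source; the upper-casing filter and class name are made up) of how the filter()/finish() contract composes through ChainFilter:

package net.i2p.httptunnel.filter;

import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch only: a toy ASCII upper-casing filter chained behind a NullFilter. */
class FilterExample {
    public static void main(String[] args) {
        Filter upper = new Filter() {
            public byte[] filter(byte[] toFilter) {
                byte[] out = new byte[toFilter.length];
                for (int i = 0; i < toFilter.length; i++) {
                    byte b = toFilter[i];
                    out[i] = (b >= 'a' && b <= 'z') ? (byte) (b - 32) : b;
                }
                return out;
            }
            public byte[] finish() { return EMPTY; } // nothing buffered
        };
        List chain = new ArrayList();
        chain.add(new NullFilter());
        chain.add(upper);
        Filter f = new ChainFilter(chain);
        byte[] body = f.filter("hello ".getBytes());
        byte[] tail = f.finish(); // empty here, since neither filter buffers data
        System.out.println(new String(body) + new String(tail)); // prints "HELLO "
    }
}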


@ -1,113 +0,0 @@
package net.i2p.httptunnel.handler;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.SocketException;
import net.i2p.I2PAppContext;
import net.i2p.I2PException;
import net.i2p.client.streaming.I2PSocket;
import net.i2p.client.streaming.I2PSocketManager;
import net.i2p.client.streaming.I2PSocketOptions;
import net.i2p.data.Destination;
import net.i2p.httptunnel.HTTPListener;
import net.i2p.httptunnel.Request;
import net.i2p.httptunnel.SocketManagerProducer;
import net.i2p.httptunnel.filter.Filter;
import net.i2p.httptunnel.filter.NullFilter;
import net.i2p.util.Log;
/**
* Handler for browsing Eepsites.
*/
public class EepHandler {
private static final Log _log = new Log(EepHandler.class);
private static I2PAppContext _context = new I2PAppContext();
protected ErrorHandler errorHandler;
/* package private */EepHandler(ErrorHandler eh) {
errorHandler = eh;
}
/**
* @param req the Request
* @param httpl an HTTPListener
* @param out where to write the results
* @param destination destination as a string, (subject to naming
* service lookup)
* @throws IOException
*/
public void handle(Request req, HTTPListener httpl, OutputStream out,
/* boolean fromProxy, */String destination) throws IOException {
SocketManagerProducer smp = httpl.getSMP();
Destination dest = _context.namingService().lookup(destination);
if (dest == null) {
errorHandler.handle(req, httpl, out, "Could not lookup host: " + destination);
return;
}
I2PSocketManager sm = smp.getManager(destination);
Filter f = new NullFilter(); //FIXME: use other filter
req.setParam("Host: ", dest.toBase64());
if (!handle(req, f, out, dest, sm)) {
errorHandler.handle(req, httpl, out, "Unable to reach peer");
}
}
/**
* @param req the Request to send out
* @param f a Filter to apply to the bytes retrieved from the Destination
* @param out where to write the results
* @param dest the Destination of the Request
* @param sm an I2PSocketManager, to get a socket for the Destination
* @return boolean, true if something was written, false otherwise.
* @throws IOException
*/
public boolean handle(Request req, Filter f, OutputStream out, Destination dest,
I2PSocketManager sm) throws IOException {
I2PSocket s = null;
boolean written = false;
try {
synchronized (sm) {
s = sm.connect(dest, new I2PSocketOptions());
}
InputStream in = new BufferedInputStream(s.getInputStream());
OutputStream sout = new BufferedOutputStream(s.getOutputStream());
sout.write(req.toByteArray());
sout.flush();
byte[] buffer = new byte[16384], filtered;
int len;
while ((len = in.read(buffer)) != -1) {
if (len != buffer.length) {
byte[] b2 = new byte[len];
System.arraycopy(buffer, 0, b2, 0, len);
filtered = f.filter(b2);
} else {
filtered = f.filter(buffer);
}
written = true;
out.write(filtered);
}
filtered = f.finish();
written = true;
out.write(filtered);
out.flush();
} catch (SocketException ex) {
_log.error("Error while handling eepsite request", ex);
return written;
} catch (IOException ex) {
_log.error("Error while handling eepsite request", ex);
return written;
} catch (I2PException ex) {
_log.error("Error while handling eepsite request", ex);
return written;
} finally {
if (s != null) s.close();
}
return true;
}
}


@ -1,41 +0,0 @@
package net.i2p.httptunnel.handler;
import java.io.IOException;
import java.io.OutputStream;
import net.i2p.httptunnel.HTTPListener;
import net.i2p.httptunnel.Request;
import net.i2p.util.Log;
/**
* Handler for general error messages.
*/
public class ErrorHandler {
private static final Log _log = new Log(ErrorHandler.class); /* UNUSED */
/* package private */ErrorHandler() {
}
/**
* @param req the Request
* @param httpl an HTTPListener
* @param out where to write the results
* @param error the error that happened
* @throws IOException
*/
public void handle(Request req, HTTPListener httpl, OutputStream out, String error) throws IOException {
// FIXME: Make nicer messages for more likely errors.
out
.write(("HTTP/1.1 500 Internal Server Error\r\n" + "Content-Type: text/html; charset=iso-8859-1\r\n\r\n")
.getBytes("ISO-8859-1"));
out
.write(("<html><head><title>" + error + "</title></head><body><h1>" + error
+ "</h1>An internal error occurred while " + "handling a request by HTTPTunnel:<br><b>" + error + "</b><h2>Complete request:</h2><b>---</b><br><i><pre>\r\n")
.getBytes("ISO-8859-1"));
out.write(req.toByteArray());
out.write(("</pre></i><br><b>---</b></body></html>").getBytes("ISO-8859-1"));
out.flush();
}
}


@ -1,67 +0,0 @@
package net.i2p.httptunnel.handler;
import java.io.IOException;
import java.io.OutputStream;
import net.i2p.httptunnel.HTTPListener;
import net.i2p.httptunnel.Request;
import net.i2p.util.Log;
/**
* Handler for requests that do not require any connection to anyone
* (except errors).
*/
public class LocalHandler {
private static final Log _log = new Log(LocalHandler.class); /* UNUSED */
/* package private */LocalHandler() {
}
/**
* @param req the Request
* @param httpl an HTTPListener
* @param out where to write the results
* @throws IOException
*/
public void handle(Request req, HTTPListener httpl, OutputStream out
/*, boolean fromProxy */) throws IOException {
//FIXME: separate multiple pages, not only a start page
//FIXME: provide some info on this page
out
.write(("HTTP/1.1 200 Document following\r\n" + "Content-Type: text/html; charset=iso-8859-1\r\n\r\n"
+ "<html><head><title>Welcome to I2P HTTPTunnel</title>"
+ "</head><body><h1>Welcome to I2P HTTPTunnel</h1>You can "
+ "browse Eepsites by adding an eepsite name to the request." + "</body></html>")
.getBytes("ISO-8859-1"));
out.flush();
}
/**
* Currently always throws an IO Exception
* @param req the Request
* @param httpl an HTTPListener
* @param out where to write the results
* @throws IOException
*/
public void handleProxyConfWarning(Request req, HTTPListener httpl, OutputStream out) throws IOException {
//FIXME
throw new IOException("jrandom ate the deprecated method. mooo");
//httpl.handleNotImplemented(out);
}
/**
* Currently always throws an IO Exception
* @param req the Request
* @param httpl an HTTPListener
* @param out where to write the results
* @throws IOException
*/
public void handleHTTPWarning(Request req, HTTPListener httpl, OutputStream out /*, boolean fromProxy */)
throws IOException {
// FIXME
throw new IOException("jrandom ate the deprecated method. mooo");
//httpl.handleNotImplemented(out);
}
}


@ -1,54 +0,0 @@
package net.i2p.httptunnel.handler;
import java.io.IOException;
import java.io.OutputStream;
import net.i2p.I2PAppContext;
import net.i2p.client.streaming.I2PSocketManager;
import net.i2p.data.Destination;
import net.i2p.httptunnel.HTTPListener;
import net.i2p.httptunnel.Request;
import net.i2p.httptunnel.SocketManagerProducer;
import net.i2p.httptunnel.filter.Filter;
import net.i2p.httptunnel.filter.NullFilter;
import net.i2p.util.Log;
/**
* Handler for proxying "normal" HTTP requests.
*/
public class ProxyHandler extends EepHandler {
private static final Log _log = new Log(ProxyHandler.class); /* UNUSED */
private static I2PAppContext _context = new I2PAppContext();
/* package private */ProxyHandler(ErrorHandler eh) {
super(eh);
}
/**
* @param req a Request
* @param httpl an HTTPListener
* @param out where to write the results
* @throws IOException
*/
public void handle(Request req, HTTPListener httpl, OutputStream out
/*, boolean fromProxy */) throws IOException {
SocketManagerProducer smp = httpl.getSMP();
Destination dest = findProxy();
if (dest == null) {
errorHandler.handle(req, httpl, out, "Could not find proxy");
return;
}
// one manager for all proxy requests
I2PSocketManager sm = smp.getManager("--proxy--");
Filter f = new NullFilter(); //FIXME: use other filter
if (!handle(req, f, out, dest, sm)) {
errorHandler.handle(req, httpl, out, "Unable to reach peer");
}
}
private Destination findProxy() {
//FIXME!
return _context.namingService().lookup("squid.i2p");
}
}


@ -1,116 +0,0 @@
package net.i2p.httptunnel.handler;
import java.io.IOException;
import java.io.OutputStream;
import net.i2p.httptunnel.HTTPListener;
import net.i2p.httptunnel.Request;
import net.i2p.util.Log;
/**
* Main handler for all requests. Dispatches requests to other handlers.
*/
public class RootHandler {
private static final Log _log = new Log(RootHandler.class); /* UNUSED */
private RootHandler() {
errorHandler = new ErrorHandler();
localHandler = new LocalHandler();
proxyHandler = new ProxyHandler(errorHandler);
eepHandler = new EepHandler(errorHandler);
}
private ErrorHandler errorHandler;
private ProxyHandler proxyHandler;
private LocalHandler localHandler;
private EepHandler eepHandler;
private static RootHandler instance;
/**
* Singleton stuff
* @return the one and only instance, yay!
*/
public static synchronized RootHandler getInstance() {
if (instance == null) {
instance = new RootHandler();
}
return instance;
}
/**
* The _ROOT_ handler: it passes its workload off to the other handlers.
* @param req a Request
* @param httpl an HTTPListener
* @param out where to write the results
* @throws IOException
*/
public void handle(Request req, HTTPListener httpl, OutputStream out) throws IOException {
String url = req.getURL();
System.out.println(url);
/* boolean byProxy = false; */
int pos;
if (url.startsWith("http://")) { // access via proxy
/* byProxy=true; */
if (httpl.firstProxyUse()) {
localHandler.handleProxyConfWarning(req, httpl, out);
return;
}
url = url.substring(7);
pos = url.indexOf("/");
String host;
if (pos == -1) {
errorHandler.handle(req, httpl, out, "No host end in URL");
return;
}
host = url.substring(0, pos);
url = url.substring(pos);
if ("i2p".equals(host) || "i2p.i2p".equals(host)) {
// normal request; go on below...
} else if (host.endsWith(".i2p")) {
// "old" service request, send a redirect...
out.write(("HTTP/1.1 302 Moved\r\nLocation: " + "http://i2p.i2p/" + host + url + "\r\n\r\n").getBytes("ISO-8859-1"));
return;
} else {
// this is for proxying to the real web
proxyHandler.handle(req, httpl, out /*, true */);
return;
}
}
if (url.equals("/")) { // main page
url = "/_/local/index";
} else if (!url.startsWith("/")) {
errorHandler.handle(req, httpl, out, "No leading slash in URL: " + url);
return;
}
String dest;
url = url.substring(1);
pos = url.indexOf("/");
if (pos == -1) {
dest = url;
url = "/";
} else {
dest = url.substring(0, pos);
url = url.substring(pos);
}
req.setURL(url);
if (dest.equals("_")) { // no eepsite
if (url.startsWith("/local/")) { // local request
req.setURL(url.substring(6));
localHandler.handle(req, httpl, out /*, byProxy */);
} else if (url.startsWith("/http/")) { // http warning
localHandler.handleHTTPWarning(req, httpl, out /*, byProxy */);
} else if (url.startsWith("/proxy/")) { // http proxying
req.setURL("http://" + url.substring(7));
proxyHandler.handle(req, httpl, out /*, byProxy */);
} else {
errorHandler.handle(req, httpl, out, "No local handler for this URL: " + url);
}
} else {
eepHandler.handle(req, httpl, out, /* byProxy, */dest);
}
}
}
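To make the dispatch above concrete (host names are illustrative): with the tunnel configured as a browser proxy, a request for http://i2p.i2p/example.i2p/index.html is handed to EepHandler with destination example.i2p and URL /index.html; http://example.i2p/index.html gets a 302 redirect to http://i2p.i2p/example.i2p/index.html; any other host such as http://www.example.com/ falls through to ProxyHandler (the outbound squid.i2p proxy). Without a proxy, a bare / is rewritten to /_/local/index and served by LocalHandler, /example.i2p/index.html goes to EepHandler, and /_/proxy/www.example.com/page is rewritten to http://www.example.com/page and sent to ProxyHandler.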


@ -2,7 +2,7 @@
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
@ -276,3 +276,65 @@ TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.

apps/i2psnark/TODO (new file, 24 lines)
View File

@@ -0,0 +1,24 @@
- I2PSnark:
- add multitorrent support by checking the metainfo hash in the
PeerAcceptor and feeding it off to the appropriate coordinator
- add a web interface
- BEncode
- Byte array length indicator can overflow.
- Support really big BigNums (only 256 chars allowed now)
- Better BEValue toString(). Uses stupid heuristic now for debugging.
- Implemented bencoding.
- Remove application level hack to calculate sha1 hash for metainfo
(But can it be done as efficiently?)
- Storage
- Check file name filter.
- TrackerClient
- Support undocumented &numwant= request.
- PeerCoordinator
- Disconnect from other seeds as soon as you are a seed yourself.
- Text UI
- Make it completely silent.

View File

@@ -0,0 +1 @@
Mark Wielaard <mark@klomp.org>

View File

@@ -0,0 +1,487 @@
2003-06-27 14:24 Mark Wielaard <mark@klomp.org>
* README: Update version number and explain new features.
2003-06-27 13:51 Mark Wielaard <mark@klomp.org>
* Makefile, org/klomp/snark/GnomeInfoWindow.java,
org/klomp/snark/GnomePeerList.java,
org/klomp/snark/PeerCoordinator.java,
org/klomp/snark/SnarkGnome.java: Add GnomeInfoWindow.
2003-06-27 00:37 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Snark.java: Implement 'info' and 'list' commands.
2003-06-27 00:05 Mark Wielaard <mark@klomp.org>
* Makefile, org/klomp/snark/GnomePeerList.java,
org/klomp/snark/SnarkGnome.java: Add GnomePeerList to show state of
connected peers.
2003-06-27 00:04 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: Peer.java, PeerID.java: Make Comparable.
2003-06-23 23:32 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerMonitorTask.java: Correctly update
lastDownloaded and lastUploaded.
2003-06-23 23:20 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Snark.java: When checking storage use the
MetaInfo from the storage.
2003-06-23 21:47 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Storage.java: Fill piece hashes, not info hashes.
2003-06-23 21:42 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/MetaInfo.java: New package private
getPieceHashes() method.
2003-06-22 19:49 Mark Wielaard <mark@klomp.org>
* README, TODO, org/klomp/snark/Snark.java: Add new command line
switch --no-commands. Don't read interactive commands or show
usage info.
2003-06-22 19:26 Mark Wielaard <mark@klomp.org>
* Makefile, org/klomp/snark/PeerCheckerTask.java,
org/klomp/snark/PeerMonitorTask.java, org/klomp/snark/Snark.java:
Split peer statistic reporting from PeerCheckerTask into
PeerMonitorTask. Use new task in Snark text ui.
2003-06-22 18:32 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Snark.java: Only print peer id when debug level
is INFO or higher.
2003-06-22 18:00 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/ShutdownListener.java: Add new ShutdownListener
interface.
2003-06-22 17:18 Mark Wielaard <mark@klomp.org>
* TODO: Text UI item to not read from stdin.
2003-06-22 17:18 Mark Wielaard <mark@klomp.org>
* snark-gnome.sh: kaffe java-gnome support (but crashes hard at the
moment).
2003-06-22 14:04 Mark Wielaard <mark@klomp.org>
* Makefile, org/klomp/snark/CoordinatorListener.java,
org/klomp/snark/PeerCoordinator.java,
org/klomp/snark/ProgressListener.java, org/klomp/snark/Snark.java,
org/klomp/snark/SnarkGnome.java,
org/klomp/snark/SnarkShutdown.java, org/klomp/snark/Storage.java,
org/klomp/snark/StorageListener.java: Split ProgressListener into
Storage, Coordinator and Shutdown listener.
2003-06-20 19:06 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: PeerCoordinator.java, Snark.java,
SnarkGnome.java, Storage.java: Progress listeners for both Storage
and PeerCoordinator.
2003-06-20 14:50 Mark Wielaard <mark@klomp.org>
* Makefile, org/klomp/snark/PeerCoordinator.java,
org/klomp/snark/ProgressListener.java,
org/klomp/snark/SnarkGnome.java: Add ProgressListener.
2003-06-20 13:22 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/SnarkGnome.java: Add Pieces collected field.
2003-06-20 12:26 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: PeerCoordinator.java, PeerListener.java,
PeerState.java: Add PeerListener.downloaded() which gets called on
chunk updates. Keep PeerCoordinator.downloaded up to date using
this; remove the adjustment in gotPiece() except when we receive a bad
piece.
2003-06-16 00:27 Mark Wielaard <mark@klomp.org>
* Makefile, snark-gnome.sh, org/klomp/snark/Snark.java,
org/klomp/snark/SnarkGnome.java: Start of a Gnome GUI.
2003-06-05 13:19 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerCoordinator.java: Don't remove a BAD piece
from the wantedPieces list. Revert to synchronizing on
wantedPieces for all relevant sections.
2003-06-03 21:09 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Snark.java: Only call readLine() when !quit.
Always print exception when fatal() is called.
2003-06-01 23:12 Mark Wielaard <mark@klomp.org>
* README: Set release version to 0.4.
2003-06-01 22:59 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerConnectionIn.java: Handle negative length
prefixes (terminates connection).
2003-06-01 21:34 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: Snark.java, SnarkShutdown.java: Implement
correct shutdown and read commands from stdin.
2003-06-01 21:34 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/TrackerInfo.java: Check that interval and peers
list actually exist.
2003-06-01 21:33 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Storage.java: Implement close().
2003-06-01 21:05 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Fix debug logging.
2003-06-01 20:55 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerCoordinator.java: Implement halt().
2003-06-01 20:55 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/ConnectionAcceptor.java: Rename stop() to halt().
2003-06-01 17:35 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Drop lock on this when calling
addRequest() from havePiece().
2003-06-01 14:46 Mark Wielaard <mark@klomp.org>
* README, org/klomp/snark/ConnectionAcceptor.java,
org/klomp/snark/HttpAcceptor.java, org/klomp/snark/Peer.java,
org/klomp/snark/PeerCheckerTask.java,
org/klomp/snark/PeerConnectionIn.java,
org/klomp/snark/PeerConnectionOut.java,
org/klomp/snark/PeerCoordinator.java,
org/klomp/snark/PeerState.java, org/klomp/snark/Snark.java,
org/klomp/snark/SnarkShutdown.java, org/klomp/snark/Storage.java,
org/klomp/snark/Tracker.java, org/klomp/snark/TrackerClient.java:
Add debug/log level.
2003-05-31 23:04 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: PeerCheckerTask.java, PeerCoordinator.java: Use
just one lock (peers) for all synchronization (even for
wantedPieces). Let PeerChecker handle real disconnect and keep
count of uploaders.
2003-05-31 22:29 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: Peer.java, PeerConnectionIn.java: Set state to
null on first disconnect() call. So always check whether it might
already be null. Helps disconnect check.
2003-05-31 22:27 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerConnectionOut.java: Don't explicitly close
the DataOutputStream (if another thread is using it libgcj seems to
not like it very much).
2003-05-30 21:33 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerConnectionOut.java: Cancel
(un)interested/(un)choke when (inverse) is still in send queue.
Remove pieces from send queue when choke message is actually sent.
2003-05-30 19:32 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Make sure listener.wantPiece(int)
is never called while lock on this is held.
2003-05-30 19:00 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerConnectionOut.java: Indentation cleanup.
2003-05-30 17:50 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Storage.java: Only synchronize on bitfield as
long as necessary.
2003-05-30 17:43 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Tracker.java: Indenting cleanup.
2003-05-30 16:32 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Better error message.
2003-05-30 15:11 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Make sure not to hold the lock on
this when calling the listener to prevent deadlocks. Implement
handling and sending of cancel messages.
2003-05-30 14:50 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerCoordinator.java: First check if we still
want a piece before trying to add it to the Storage.
2003-05-30 14:49 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerConnectionOut.java: Implement
sendCancel(Request). Add cancelRequest(int, int, int).
2003-05-30 14:46 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Request.java: Add hashCode() and equals(Object)
methods.
2003-05-30 14:45 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Peer.java: Fix wheter -> whether javadoc
comments. Mark state null immediately after calling
listener.disconnected(). Call PeerState.havePiece() not
PeerConnectionOut.sendHave() directly.
2003-05-25 19:23 Mark Wielaard <mark@klomp.org>
* TODO: Add PeerCoordinator TODO for connecting to seeds.
2003-05-23 12:12 Mark Wielaard <mark@klomp.org>
* Makefile: Create class files with jikes again.
2003-05-18 22:01 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: PeerCheckerTask.java, PeerCoordinator.java:
Prefer to (optimistically) unchoke first those peers that unchoked
us. And make sure to not unchoke a peer that we just choked.
2003-05-18 21:48 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Peer.java: Fix isChoked() to not always return
true.
2003-05-18 14:46 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: Peer.java, PeerCheckerTask.java,
PeerCoordinator.java, PeerState.java: Remove separate Peer
downloading/uploading states. Keep choke and interest always up to
date. Uploading is now just when we are not choking the peer.
Downloading is now defined as being unchoked and interesting.
CHECK_PERIOD is now 20 seconds. MAX_CONNECTIONS is now 24.
MAX_DOWNLOADERS doesn't exist anymore. We download whenever we can
from peers.
2003-05-18 13:57 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerConnectionOut.java: Remove piece messages
from queue when we are choking. (They will have to be rerequested
when we unchoke the peer again.)
2003-05-15 00:08 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Ignore missed chunk requests,
don't requeue them.
2003-05-15 00:06 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Request.java: Add sanity check
2003-05-10 15:47 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Snark.java: Add extra '(' to usage message.
2003-05-10 15:22 Mark Wielaard <mark@klomp.org>
* README: Set version to 0.3 (The Bakers Tale).
2003-05-10 15:17 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Mention received piece in warning
message.
2003-05-10 03:20 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: PeerConnectionIn.java, PeerState.java,
Request.java: Remove currentRequest and handle all piece messages
from the lastRequested list.
2003-05-09 20:02 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Fix nothing requested warning
message.
2003-05-09 19:59 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerConnectionOut.java: Piece messages are big.
So if there are other (control) messages make sure they are sent
first. Also remove request messages from the queue if we are
currently being choked to prevent them from being sent even if we
get unchoked a little later. (Since we will resend them anyway in
that case.)
2003-05-09 18:33 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: Peer.java, PeerCheckerTask.java,
PeerCoordinator.java, PeerID.java: New definition of PeerID.equals
(port + address + id) and new method PeerID.sameID (only id). These
are used to really see if we already have a connection to a certain
peer (active setup vs passive setup).
2003-05-08 03:05 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Use Snark.debug() not
System.out.println().
2003-05-06 20:29 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: s/noting/nothing/
2003-05-06 20:28 Mark Wielaard <mark@klomp.org>
* Makefile: s/lagacy/legacy/
2003-05-05 23:17 Mark Wielaard <mark@klomp.org>
* README: Set version to 0.2, explain new functionality and add
examples.
2003-05-05 22:42 Mark Wielaard <mark@klomp.org>
* .cvsignore, Makefile, org/klomp/snark/StaticSnark.java: Enable
-static binary creation.
2003-05-05 22:42 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Tracker.java: Disable --ip support.
2003-05-05 21:02 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: HttpAcceptor.java, PeerCheckerTask.java,
PeerCoordinator.java, TrackerClient.java: Use Snark.debug() not
System.out.println().
2003-05-05 21:01 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerConnectionIn.java: Be prepared to handle the
case where currentRequest is null.
2003-05-05 21:00 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Snark.java: Improve argument parsing errors.
2003-05-05 21:00 Mark Wielaard <mark@klomp.org>
* Makefile: Use gcj -C again for creating the class files.
2003-05-05 09:24 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Just clear outstandingRequests,
never make it null.
2003-05-05 02:55 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/TrackerClient.java: Always retry both first
started event and every other event as long as the TrackerClient is
not stopped.
2003-05-05 02:54 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Snark.java: Remove double assignment port.
2003-05-05 02:54 Mark Wielaard <mark@klomp.org>
* TODO: Add Tracker TODO item.
2003-05-04 23:38 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: ConnectionAcceptor.java, MetaInfo.java,
Snark.java, Storage.java, Tracker.java: Add info hash calculation
to MetaInfo. Add torrent creation to Storage. Add ip parameter
handling to Tracker. Make ConnectionAcceptor handle
null/non-existing HttpAcceptors. Add debug output, --ip handling
and all the above to Snark.
2003-05-04 23:36 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/TrackerClient.java: Handle all failing requests
the same (print a warning).
2003-05-03 15:46 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: Peer.java, PeerID.java, TrackerInfo.java: Split
Peer and PeerID a little more.
2003-05-03 15:44 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/MetaInfo.java: Add reannounce() and
getTorrentData().
2003-05-03 15:38 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/: PeerCheckerTask.java, PeerCoordinator.java:
More concise verbose/debug output. Always use addUpDownloader() to
set peers upload or download state to true.
2003-05-03 13:38 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/TrackerClient.java: Compile fixes.
2003-05-03 13:32 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/TrackerClient.java: Only generate fatal() call on
first Tracker access. Otherwise just print a warning error message.
2003-05-03 03:10 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerState.java: Better handle resending
outstanding pieces and try to recover better from unrequested
pieces.
2003-05-02 21:33 Mark Wielaard <mark@klomp.org>
* Makefile, org/klomp/snark/HttpAcceptor.java,
org/klomp/snark/MetaInfo.java, org/klomp/snark/PeerID.java,
org/klomp/snark/Snark.java, org/klomp/snark/Tracker.java,
org/klomp/snark/TrackerClient.java,
org/klomp/snark/bencode/BEncoder.java: Add Tracker, PeerID and
BEncoder.
2003-05-01 20:17 Mark Wielaard <mark@klomp.org>
* Makefile, org/klomp/snark/ConnectionAcceptor.java,
org/klomp/snark/HttpAcceptor.java, org/klomp/snark/Peer.java,
org/klomp/snark/PeerAcceptor.java, org/klomp/snark/Snark.java: Add
ConnectionAcceptor that handles both PeerAcceptor and HttpAcceptor.
2003-05-01 18:39 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/PeerCoordinator.java: connected() synchronize on
peers.
2003-04-28 02:56 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/SnarkShutdown.java: Wait some time before
returning...
2003-04-28 02:56 Mark Wielaard <mark@klomp.org>
* TODO: More items.
2003-04-28 02:56 Mark Wielaard <mark@klomp.org>
* org/klomp/snark/Snark.java: Calculate real random ID.
2003-04-27 Mark Wielaard <mark@klomp.org>
* snark: Initial (0.1) version.

View File

@@ -0,0 +1,93 @@
<?xml version="1.0" encoding="UTF-8"?>
<project basedir="." default="all" name="i2psnark">
<target name="all" depends="clean, build" />
<target name="build" depends="builddep, jar, war" />
<target name="builddep">
<ant dir="../../jetty/" target="build" />
<ant dir="../../streaming/java/" target="build" />
<!-- streaming will build ministreaming and core -->
</target>
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac
srcdir="./src"
debug="true" deprecation="on" source="1.3" target="1.3"
destdir="./build/obj"
classpath="../../../core/java/build/i2p.jar:../../../router/java/build/router.jar:../../jetty/jettylib/org.mortbay.jetty.jar:../../jetty/jettylib/javax.servlet.jar:../../ministreaming/java/build/mstreaming.jar" />
</target>
<target name="jar" depends="builddep, compile">
<jar destfile="./build/i2psnark.jar" basedir="./build/obj" includes="**/*.class" excludes="**/*Servlet.class">
<manifest>
<attribute name="Main-Class" value="org.klomp.snark.Snark" />
<attribute name="Class-Path" value="i2p.jar mstreaming.jar streaming.jar" />
</manifest>
</jar>
</target>
<target name="war" depends="jar">
<war destfile="../i2psnark.war" webxml="../web.xml">
<classes dir="./build/obj" includes="**/*" />
</war>
</target>
<target name="standalone" depends="standalone_prep">
<zip destfile="i2psnark-standalone.zip">
<zipfileset dir="./dist/" prefix="i2psnark/" />
</zip>
</target>
<target name="standalone_prep" depends="war">
<javac debug="true" deprecation="on" source="1.3" target="1.3"
destdir="./build" srcdir="src/" includes="org/klomp/snark/web/RunStandalone.java" >
<classpath>
<pathelement location="../../jetty/jettylib/commons-logging.jar" />
<pathelement location="../../jetty/jettylib/commons-el.jar" />
<pathelement location="../../jetty/jettylib/org.mortbay.jetty.jar" />
<pathelement location="../../jetty/jettylib/javax.servlet.jar" />
<pathelement location="../../../core/java/build/i2p.jar" />
</classpath>
</javac>
<jar destfile="./build/launch-i2psnark.jar" basedir="./build/" includes="org/klomp/snark/web/RunStandalone.class">
<manifest>
<attribute name="Main-Class" value="org.klomp.snark.web.RunStandalone" />
<attribute name="Class-Path" value="lib/i2p.jar lib/mstreaming.jar lib/streaming.jar lib/commons-el.jar lib/commons-logging.jar lib/jasper-compiler.jar lib/jasper-runtime.jar lib/javax.servlet.jar lib/org.mortbay.jetty.jar" />
</manifest>
</jar>
<delete dir="./dist" />
<mkdir dir="./dist" />
<copy file="./build/launch-i2psnark.jar" tofile="./dist/launch-i2psnark.jar" />
<mkdir dir="./dist/webapps" />
<copy file="../i2psnark.war" tofile="./dist/webapps/i2psnark.war" />
<mkdir dir="./dist/lib" />
<copy file="../../../core/java/build/i2p.jar" tofile="./dist/lib/i2p.jar" />
<copy file="../../jetty/jettylib/commons-el.jar" tofile="./dist/lib/commons-el.jar" />
<copy file="../../jetty/jettylib/commons-logging.jar" tofile="./dist/lib/commons-logging.jar" />
<copy file="../../jetty/jettylib/javax.servlet.jar" tofile="./dist/lib/javax.servlet.jar" />
<copy file="../../jetty/jettylib/org.mortbay.jetty.jar" tofile="./dist/lib/org.mortbay.jetty.jar" />
<copy file="../../jetty/jettylib/jasper-compiler.jar" tofile="./dist/lib/jasper-compiler.jar" />
<copy file="../../jetty/jettylib/jasper-runtime.jar" tofile="./dist/lib/jasper-runtime.jar" />
<copy file="../../ministreaming/java/build/mstreaming.jar" tofile="./dist/lib/mstreaming.jar" />
<copy file="../../streaming/java/build/streaming.jar" tofile="./dist/lib/streaming.jar" />
<copy file="../jetty-i2psnark.xml" tofile="./dist/jetty-i2psnark.xml" />
<copy file="../readme-standalone.txt" tofile="./dist/readme.txt" />
<mkdir dir="./dist/work" />
<mkdir dir="./dist/logs" />
<zip destfile="i2psnark-standalone.zip">
<zipfileset dir="./dist/" prefix="i2psnark/" />
</zip>
</target>
<target name="clean">
<delete dir="./build" />
<delete file="../i2psnark.war" />
<delete file="./i2psnark-standalone.zip" />
</target>
<target name="cleandep" depends="clean">
<ant dir="../../ministreaming/java/" target="distclean" />
</target>
<target name="distclean" depends="clean">
<ant dir="../../ministreaming/java/" target="distclean" />
</target>
</project>

View File

@@ -0,0 +1,156 @@
/* BitField - Container of a byte array representing set and unset bits.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
/**
* Container of a byte array representing set and unset bits.
*/
public class BitField
{
private final byte[] bitfield;
private final int size;
private int count;
/**
* Creates a new BitField that represents <code>size</code> unset bits.
*/
public BitField(int size)
{
this.size = size;
int arraysize = ((size-1)/8)+1;
bitfield = new byte[arraysize];
this.count = 0;
}
/**
* Creates a new BitField that represents <code>size</code> bits
* as set by the given byte array. This will make a copy of the array.
* Extra bytes will be ignored.
*
* @exception ArrayIndexOutOfBoundsException if the given byte array is not large
* enough.
*/
public BitField(byte[] bitfield, int size)
{
this.size = size;
int arraysize = ((size-1)/8)+1;
this.bitfield = new byte[arraysize];
// XXX - More correct would be to check that unused bits are
// cleared or clear them explicitly ourselves.
System.arraycopy(bitfield, 0, this.bitfield, 0, arraysize);
this.count = 0;
for (int i = 0; i < size; i++)
if (get(i))
this.count++;
}
/**
* This returns the actual byte array used. Changes to this array
* affect this BitField. Note that some bits at the end of the byte
* array are supposed to be always unset if they represent bits
* bigger than the size of the bitfield.
*/
public byte[] getFieldBytes()
{
return bitfield;
}
/**
* Return the size of the BitField. The returned value is one bigger
* than the last valid bit number (since bit numbers are counted
* from zero).
*/
public int size()
{
return size;
}
/**
* Sets the given bit to true.
*
* @exception IndexOutOfBoundsException if bit is smaller than zero or
* greater than or equal to size.
*/
public void set(int bit)
{
if (bit < 0 || bit >= size)
throw new IndexOutOfBoundsException(Integer.toString(bit));
int index = bit/8;
int mask = 128 >> (bit % 8);
if ((bitfield[index] & mask) == 0) {
count++;
bitfield[index] |= mask;
}
}
/**
* Return true if the bit is set or false if it is not.
*
* @exception IndexOutOfBoundsException if bit is smaller than zero or
* greater than or equal to size.
*/
public boolean get(int bit)
{
if (bit < 0 || bit >= size)
throw new IndexOutOfBoundsException(Integer.toString(bit));
int index = bit/8;
int mask = 128 >> (bit % 8);
return (bitfield[index] & mask) != 0;
}
/**
* Return the number of set bits.
*/
public int count()
{
return count;
}
/**
* Return true if all bits are set.
*/
public boolean complete()
{
return count >= size;
}
public String toString()
{
// Not very efficient
StringBuffer sb = new StringBuffer("BitField(");
sb.append(size).append(")[");
for (int i = 0; i < size; i++)
if (get(i))
{
sb.append(' ');
sb.append(i);
}
sb.append(" ]");
return sb.toString();
}
}
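Since BitField is self-contained, a minimal usage sketch follows; the wrapper class name and the piece counts are illustrative only and not part of this changeset.

package org.klomp.snark;

/** Usage sketch: track which of 8 pieces we have (hypothetical example). */
class BitFieldExample
{
  public static void main(String[] args)
  {
    BitField have = new BitField(8);       // 8 pieces, all bits unset
    have.set(0);
    have.set(3);
    System.out.println(have);              // prints "BitField(8)[ 0 3 ]"
    System.out.println(have.count());      // 2 bits set
    System.out.println(have.complete());   // false until all 8 bits are set
  }
}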

View File

@@ -0,0 +1,189 @@
/* ConnectionAcceptor - Accepts connections and routes them to sub-acceptors.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import net.i2p.I2PException;
import net.i2p.client.streaming.I2PServerSocket;
import net.i2p.client.streaming.I2PSocket;
import net.i2p.util.I2PThread;
import net.i2p.util.Log;
/**
* Accepts connections on a TCP port and routes them to sub-acceptors.
*/
public class ConnectionAcceptor implements Runnable
{
private static final ConnectionAcceptor _instance = new ConnectionAcceptor();
public static final ConnectionAcceptor instance() { return _instance; }
private Log _log = new Log(ConnectionAcceptor.class);
private I2PServerSocket serverSocket;
private PeerAcceptor peeracceptor;
private Thread thread;
private boolean stop;
private boolean socketChanged;
private ConnectionAcceptor() {}
public synchronized void startAccepting(PeerCoordinatorSet set, I2PServerSocket socket) {
if (serverSocket != socket) {
if ( (peeracceptor == null) || (peeracceptor.coordinators != set) )
peeracceptor = new PeerAcceptor(set);
serverSocket = socket;
stop = false;
socketChanged = true;
if (thread == null) {
thread = new I2PThread(this, "I2PSnark acceptor");
thread.setDaemon(true);
thread.start();
}
}
}
public ConnectionAcceptor(I2PServerSocket serverSocket,
PeerAcceptor peeracceptor)
{
this.serverSocket = serverSocket;
this.peeracceptor = peeracceptor;
socketChanged = false;
stop = false;
thread = new I2PThread(this, "I2PSnark acceptor");
thread.setDaemon(true);
thread.start();
}
public void halt()
{
if (true) throw new RuntimeException("wtf");
stop = true;
I2PServerSocket ss = serverSocket;
if (ss != null)
try
{
ss.close();
}
catch(I2PException ioe) { }
Thread t = thread;
if (t != null)
t.interrupt();
}
public void restart() {
serverSocket = I2PSnarkUtil.instance().getServerSocket();
socketChanged = true;
Thread t = thread;
if (t != null)
t.interrupt();
}
public int getPort()
{
return 6881; // serverSocket.getLocalPort();
}
public void run()
{
while(!stop)
{
if (socketChanged) {
// ok, already updated
socketChanged = false;
}
while ( (serverSocket == null) && (!stop)) {
serverSocket = I2PSnarkUtil.instance().getServerSocket();
if (serverSocket == null)
try { Thread.sleep(10*1000); } catch (InterruptedException ie) {}
}
try
{
I2PSocket socket = serverSocket.accept();
if (socket == null) {
if (socketChanged) {
continue;
} else {
I2PServerSocket ss = I2PSnarkUtil.instance().getServerSocket();
if (ss != serverSocket) {
serverSocket = ss;
socketChanged = true;
}
}
} else {
Thread t = new I2PThread(new Handler(socket), "Connection-" + socket);
t.start();
}
}
catch (I2PException ioe)
{
if (!socketChanged) {
Snark.debug("Error while accepting: " + ioe, Snark.ERROR);
stop = true;
}
}
catch (IOException ioe)
{
Snark.debug("Error while accepting: " + ioe, Snark.ERROR);
stop = true;
}
}
try
{
if (serverSocket != null)
serverSocket.close();
}
catch (I2PException ignored) { }
throw new RuntimeException("wtf");
}
private class Handler implements Runnable {
private I2PSocket _socket;
public Handler(I2PSocket socket) {
_socket = socket;
}
public void run() {
try {
InputStream in = _socket.getInputStream();
OutputStream out = _socket.getOutputStream();
if (true) {
in = new BufferedInputStream(in);
//out = new BufferedOutputStream(out);
}
if (_log.shouldLog(Log.DEBUG))
_log.debug("Handling socket from " + _socket.getPeerDestination().calculateHash().toBase64());
peeracceptor.connection(_socket, in, out);
} catch (IOException ioe) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Error handling connection from " + _socket.getPeerDestination().calculateHash().toBase64(), ioe);
try { _socket.close(); } catch (IOException ignored) { }
}
}
}
}
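A minimal sketch of how the singleton accessor above might be wired up at startup. PeerCoordinatorSet is referenced by this class but not shown in this diff, and the bootstrap class below is hypothetical, not the application's actual startup path.

package org.klomp.snark;

import net.i2p.client.streaming.I2PServerSocket;

/** Hypothetical bootstrap: hand the shared acceptor a server socket once one exists. */
class AcceptorBootstrapSketch
{
  static void start(PeerCoordinatorSet coordinators) // assumed to be populated elsewhere
  {
    I2PServerSocket socket = I2PSnarkUtil.instance().getServerSocket();
    if (socket != null)
      ConnectionAcceptor.instance().startAccepting(coordinators, socket);
  }
}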

View File

@@ -0,0 +1,33 @@
/* CoordinatorListener.java - Callback when a peer changes state
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
/**
* Callback used when some peer changes state.
*/
public interface CoordinatorListener
{
/**
* Called when the PeerCoordinator notices a change in the state of a peer.
*/
void peerChange(PeerCoordinator coordinator, Peer peer);
}
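Because the interface has a single callback, an implementation can be as small as the logging sketch below; the class is illustrative only, and the real listeners in i2psnark do more than print.

package org.klomp.snark;

/** Illustrative listener that only logs peer state changes. */
class LoggingCoordinatorListener implements CoordinatorListener
{
  public void peerChange(PeerCoordinator coordinator, Peer peer)
  {
    // Peer.toString() prints the peer id (or "[unknown id]") plus an internal counter.
    System.out.println("peer change: " + peer);
  }
}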

View File

@@ -0,0 +1,299 @@
package org.klomp.snark;
import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import net.i2p.I2PAppContext;
import net.i2p.I2PException;
import net.i2p.client.I2PSession;
import net.i2p.client.streaming.I2PServerSocket;
import net.i2p.client.streaming.I2PSocket;
import net.i2p.client.streaming.I2PSocketManager;
import net.i2p.client.streaming.I2PSocketManagerFactory;
import net.i2p.data.DataFormatException;
import net.i2p.data.Destination;
import net.i2p.data.Hash;
import net.i2p.util.EepGet;
import net.i2p.util.Log;
import net.i2p.util.SimpleTimer;
/**
* I2P specific helpers for I2PSnark
*/
public class I2PSnarkUtil {
private I2PAppContext _context;
private Log _log;
private static I2PSnarkUtil _instance = new I2PSnarkUtil();
public static I2PSnarkUtil instance() { return _instance; }
private boolean _shouldProxy;
private String _proxyHost;
private int _proxyPort;
private String _i2cpHost;
private int _i2cpPort;
private Map _opts;
private I2PSocketManager _manager;
private boolean _configured;
private Set _shitlist;
private int _maxUploaders;
private int _maxUpBW;
private I2PSnarkUtil() {
_context = I2PAppContext.getGlobalContext();
_log = _context.logManager().getLog(Snark.class);
_opts = new HashMap();
setProxy("127.0.0.1", 4444);
setI2CPConfig("127.0.0.1", 7654, null);
_shitlist = new HashSet(64);
_configured = false;
_maxUploaders = Snark.MAX_TOTAL_UPLOADERS;
}
/**
* Specify what HTTP proxy tracker requests should go through (specify a null
* host for no proxying)
*
*/
public void setProxy(String host, int port) {
if ( (host != null) && (port > 0) ) {
_shouldProxy = true;
_proxyHost = host;
_proxyPort = port;
} else {
_shouldProxy = false;
_proxyHost = null;
_proxyPort = -1;
}
_configured = true;
}
public boolean configured() { return _configured; }
public void setI2CPConfig(String i2cpHost, int i2cpPort, Map opts) {
_i2cpHost = i2cpHost;
_i2cpPort = i2cpPort;
if (opts != null)
_opts.putAll(opts);
_configured = true;
}
public void setMaxUploaders(int limit) {
_maxUploaders = limit;
_configured = true;
}
public void setMaxUpBW(int limit) {
_maxUpBW = limit;
_configured = true;
}
public String getI2CPHost() { return _i2cpHost; }
public int getI2CPPort() { return _i2cpPort; }
public Map getI2CPOptions() { return _opts; }
public String getEepProxyHost() { return _proxyHost; }
public int getEepProxyPort() { return _proxyPort; }
public boolean getEepProxySet() { return _shouldProxy; }
public int getMaxUploaders() { return _maxUploaders; }
public int getMaxUpBW() { return _maxUpBW; }
/**
* Connect to the router, if we aren't already
*/
synchronized public boolean connect() {
if (_manager == null) {
Properties opts = new Properties();
if (_opts != null) {
for (Iterator iter = _opts.keySet().iterator(); iter.hasNext(); ) {
String key = (String)iter.next();
opts.setProperty(key, _opts.get(key).toString());
}
}
if (opts.getProperty("inbound.nickname") == null)
opts.setProperty("inbound.nickname", "I2PSnark");
if (opts.getProperty("outbound.nickname") == null)
opts.setProperty("outbound.nickname", "I2PSnark");
if (opts.getProperty("i2p.streaming.inactivityTimeout") == null)
opts.setProperty("i2p.streaming.inactivityTimeout", "240000");
if (opts.getProperty("i2p.streaming.inactivityAction") == null)
opts.setProperty("i2p.streaming.inactivityAction", "1"); // 1 == disconnect, 2 == ping
if (opts.getProperty("i2p.streaming.initialWindowSize") == null)
opts.setProperty("i2p.streaming.initialWindowSize", "1");
if (opts.getProperty("i2p.streaming.slowStartGrowthRateFactor") == null)
opts.setProperty("i2p.streaming.slowStartGrowthRateFactor", "1");
//if (opts.getProperty("i2p.streaming.writeTimeout") == null)
// opts.setProperty("i2p.streaming.writeTimeout", "90000");
//if (opts.getProperty("i2p.streaming.readTimeout") == null)
// opts.setProperty("i2p.streaming.readTimeout", "120000");
_manager = I2PSocketManagerFactory.createManager(_i2cpHost, _i2cpPort, opts);
}
return (_manager != null);
}
public boolean connected() { return _manager != null; }
/**
* Destroy the destination itself
*/
public void disconnect() {
I2PSocketManager mgr = _manager;
_manager = null;
_shitlist.clear();
mgr.destroySocketManager();
}
/** connect to the given destination */
I2PSocket connect(PeerID peer) throws IOException {
Hash dest = peer.getAddress().calculateHash();
synchronized (_shitlist) {
if (_shitlist.contains(dest))
throw new IOException("Not trying to contact " + dest.toBase64() + ", as they are shitlisted");
}
try {
I2PSocket rv = _manager.connect(peer.getAddress());
if (rv != null) synchronized (_shitlist) { _shitlist.remove(dest); }
return rv;
} catch (I2PException ie) {
synchronized (_shitlist) {
_shitlist.add(dest);
}
SimpleTimer.getInstance().addEvent(new Unshitlist(dest), 10*60*1000);
throw new IOException("Unable to reach the peer " + peer + ": " + ie.getMessage());
}
}
private class Unshitlist implements SimpleTimer.TimedEvent {
private Hash _dest;
public Unshitlist(Hash dest) { _dest = dest; }
public void timeReached() { synchronized (_shitlist) { _shitlist.remove(_dest); } }
}
/**
* fetch the given URL, returning the file it is stored in, or null on error
*/
public File get(String url) { return get(url, true, 0); }
public File get(String url, boolean rewrite) { return get(url, rewrite, 0); }
public File get(String url, int retries) { return get(url, true, retries); }
public File get(String url, boolean rewrite, int retries) {
_log.debug("Fetching [" + url + "] proxy=" + _proxyHost + ":" + _proxyPort + ": " + _shouldProxy);
File out = null;
try {
out = File.createTempFile("i2psnark", "url", new File("."));
} catch (IOException ioe) {
ioe.printStackTrace();
out.delete();
return null;
}
String fetchURL = url;
if (rewrite)
fetchURL = rewriteAnnounce(url);
//_log.debug("Rewritten url [" + fetchURL + "]");
EepGet get = new EepGet(_context, _shouldProxy, _proxyHost, _proxyPort, retries, out.getAbsolutePath(), fetchURL);
if (get.fetch()) {
_log.debug("Fetch successful [" + url + "]: size=" + out.length());
return out;
} else {
_log.warn("Fetch failed [" + url + "]");
out.delete();
return null;
}
}
public I2PServerSocket getServerSocket() {
I2PSocketManager mgr = _manager;
if (mgr != null)
return mgr.getServerSocket();
else
return null;
}
String getOurIPString() {
if (_manager == null)
return "unknown";
I2PSession sess = _manager.getSession();
if (sess != null) {
Destination dest = sess.getMyDestination();
if (dest != null)
return dest.toBase64();
}
return "unknown";
}
Destination getDestination(String ip) {
if (ip == null) return null;
if (ip.endsWith(".i2p")) {
if (ip.length() < 520) { // key + ".i2p"
Destination dest = _context.namingService().lookup(ip);
if (dest != null)
return dest;
}
try {
return new Destination(ip.substring(0, ip.length()-4)); // sans .i2p
} catch (DataFormatException dfe) {
return null;
}
} else {
try {
return new Destination(ip);
} catch (DataFormatException dfe) {
return null;
}
}
}
public String lookup(String name) {
Destination dest = getDestination(name);
if (dest == null)
return null;
return dest.toBase64();
}
/**
* Given http://KEY.i2p/foo/announce turn it into http://i2p/KEY/foo/announce
* Given http://tracker.blah.i2p/foo/announce leave it alone
*/
String rewriteAnnounce(String origAnnounce) {
int destStart = "http://".length();
int destEnd = origAnnounce.indexOf(".i2p");
if (destEnd < destStart + 516)
return origAnnounce;
int pathStart = origAnnounce.indexOf('/', destEnd);
String rv = "http://i2p/" + origAnnounce.substring(destStart, destEnd) + origAnnounce.substring(pathStart);
//_log.debug("Rewriting [" + origAnnounce + "] as [" + rv + "]");
return rv;
}
/** hook between snark's logger and an i2p log */
void debug(String msg, int snarkDebugLevel, Throwable t) {
if (t instanceof OutOfMemoryError) {
try { Thread.sleep(100); } catch (InterruptedException ie) {}
try {
t.printStackTrace();
} catch (Throwable tt) {}
try {
System.out.println("OOM thread: " + Thread.currentThread().getName());
} catch (Throwable tt) {}
}
switch (snarkDebugLevel) {
case 0:
case 1:
_log.error(msg, t);
break;
case 2:
_log.warn(msg, t);
break;
case 3:
case 4:
_log.info(msg, t);
break;
case 5:
case 6:
default:
_log.debug(msg, t);
break;
}
}
}
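A configuration sketch using only the public setters and connect() defined above; the proxy and I2CP values repeat the class defaults, and the uploader limit is an arbitrary example, not a recommended setting.

package org.klomp.snark;

/** Illustrative setup: configure the singleton, then open the I2P socket manager. */
class SnarkUtilConfigSketch
{
  static boolean configureAndConnect()
  {
    I2PSnarkUtil util = I2PSnarkUtil.instance();
    util.setProxy("127.0.0.1", 4444);             // eepproxy used for tracker HTTP fetches
    util.setI2CPConfig("127.0.0.1", 7654, null);  // local router, no extra session options
    util.setMaxUploaders(8);                      // arbitrary example limit
    return util.connect();                        // creates the I2PSocketManager on first call
  }
}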

View File

@@ -0,0 +1,141 @@
/* Message - A protocol message which can be sent through a DataOutputStream.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.DataOutputStream;
import java.io.IOException;
import net.i2p.util.SimpleTimer;
// Used to queue outgoing connections
// sendMessage() should be used to translate them to wire format.
class Message
{
final static byte KEEP_ALIVE = -1;
final static byte CHOKE = 0;
final static byte UNCHOKE = 1;
final static byte INTERESTED = 2;
final static byte UNINTERESTED = 3;
final static byte HAVE = 4;
final static byte BITFIELD = 5;
final static byte REQUEST = 6;
final static byte PIECE = 7;
final static byte CANCEL = 8;
// Not all fields are used for every message.
// KEEP_ALIVE doesn't have a real wire representation
byte type;
// Used for HAVE, REQUEST, PIECE and CANCEL messages.
int piece;
// Used for REQUEST, PIECE and CANCEL messages.
int begin;
int length;
// Used for PIECE and BITFIELD messages
byte[] data;
int off;
int len;
SimpleTimer.TimedEvent expireEvent;
/** Utility method for sending a message through a DataStream. */
void sendMessage(DataOutputStream dos) throws IOException
{
// KEEP_ALIVE is special.
if (type == KEEP_ALIVE)
{
dos.writeInt(0);
return;
}
// Calculate the total length in bytes
// Type is one byte.
int datalen = 1;
// piece is 4 bytes.
if (type == HAVE || type == REQUEST || type == PIECE || type == CANCEL)
datalen += 4;
// begin/offset is 4 bytes
if (type == REQUEST || type == PIECE || type == CANCEL)
datalen += 4;
// length is 4 bytes
if (type == REQUEST || type == CANCEL)
datalen += 4;
// add length of data for piece or bitfield array.
if (type == BITFIELD || type == PIECE)
datalen += len;
// Send length
dos.writeInt(datalen);
dos.writeByte(type & 0xFF);
// Send additional info (piece number)
if (type == HAVE || type == REQUEST || type == PIECE || type == CANCEL)
dos.writeInt(piece);
// Send additional info (begin/offset)
if (type == REQUEST || type == PIECE || type == CANCEL)
dos.writeInt(begin);
// Send additional info (length); for PIECE this is implicit.
if (type == REQUEST || type == CANCEL)
dos.writeInt(length);
// Send actual data
if (type == BITFIELD || type == PIECE)
dos.write(data, off, len);
}
public String toString()
{
switch (type)
{
case KEEP_ALIVE:
return "KEEP_ALIVE";
case CHOKE:
return "CHOKE";
case UNCHOKE:
return "UNCHOKE";
case INTERESTED:
return "INTERESTED";
case UNINTERESTED:
return "UNINTERESTED";
case HAVE:
return "HAVE(" + piece + ")";
case BITFIELD:
return "BITFIELD";
case REQUEST:
return "REQUEST(" + piece + "," + begin + "," + length + ")";
case PIECE:
return "PIECE(" + piece + "," + begin + "," + length + ")";
case CANCEL:
return "CANCEL(" + piece + "," + begin + "," + length + ")";
default:
return "<UNKNOWN>";
}
}
}
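To make the wire format concrete, the sketch below serializes a REQUEST message: per sendMessage(), that is a 4-byte length prefix of 13, the type byte 6, and three 4-byte integers, 17 bytes in all. The helper class and its argument values are illustrative.

package org.klomp.snark;

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

/** Illustrative encoder showing what a REQUEST message looks like on the wire. */
class MessageWireSketch
{
  static byte[] encodeRequest(int piece, int begin, int length) throws IOException
  {
    Message m = new Message();
    m.type = Message.REQUEST;
    m.piece = piece;
    m.begin = begin;
    m.length = length;
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    m.sendMessage(new DataOutputStream(baos));
    return baos.toByteArray(); // length prefix (13) + type + piece + begin + length = 17 bytes
  }
}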

View File

@@ -0,0 +1,463 @@
/* MetaInfo - Holds all information gotten from a torrent file.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import net.i2p.crypto.SHA1;
import net.i2p.data.Base64;
import net.i2p.util.Log;
import org.klomp.snark.bencode.BDecoder;
import org.klomp.snark.bencode.BEValue;
import org.klomp.snark.bencode.BEncoder;
import org.klomp.snark.bencode.InvalidBEncodingException;
/**
* Note: this class is buggy, as it doesn't propagate custom meta fields into the bencoded
* info data, and from there to the info_hash. At the moment, though, it seems to work with
* torrents created by I2P-BT, I2PRufus and Azureus.
*
*/
public class MetaInfo
{
private static final Log _log = new Log(MetaInfo.class);
private final String announce;
private final byte[] info_hash;
private final String name;
private final String name_utf8;
private final List files;
private final List files_utf8;
private final List lengths;
private final int piece_length;
private final byte[] piece_hashes;
private final long length;
private final Map infoMap;
private byte[] torrentdata;
MetaInfo(String announce, String name, String name_utf8, List files, List lengths,
int piece_length, byte[] piece_hashes, long length)
{
this.announce = announce;
this.name = name;
this.name_utf8 = name_utf8;
this.files = files;
this.files_utf8 = null;
this.lengths = lengths;
this.piece_length = piece_length;
this.piece_hashes = piece_hashes;
this.length = length;
this.info_hash = calculateInfoHash();
infoMap = null;
}
/**
* Creates a new MetaInfo from the given InputStream. The
* InputStream must start with a correctly bencoded dictionary
* describing the torrent.
*/
public MetaInfo(InputStream in) throws IOException
{
this(new BDecoder(in));
}
/**
* Creates a new MetaInfo from the given BDecoder. The BDecoder
* must have a complete dictionary describing the torrent.
*/
public MetaInfo(BDecoder be) throws IOException
{
// Note that evaluation order matters here...
this(be.bdecodeMap().getMap());
}
/**
* Creates a new MetaInfo from a Map of BEValues and the SHA1 over
* the original bencoded info dictionary (this is a hack, we could
* reconstruct the bencoded stream and recalculate the hash). Will
* throw an InvalidBEncodingException if the given map does not
* contain a valid announce string or info dictionary.
*/
public MetaInfo(Map m) throws InvalidBEncodingException
{
_log.debug("Creating a metaInfo: " + m, new Exception("source"));
BEValue val = (BEValue)m.get("announce");
if (val == null)
throw new InvalidBEncodingException("Missing announce string");
this.announce = val.getString();
val = (BEValue)m.get("info");
if (val == null)
throw new InvalidBEncodingException("Missing info map");
Map info = val.getMap();
infoMap = info;
val = (BEValue)info.get("name");
if (val == null)
throw new InvalidBEncodingException("Missing name string");
name = val.getString();
val = (BEValue)info.get("name.utf-8");
if (val != null)
name_utf8 = val.getString();
else
name_utf8 = null;
val = (BEValue)info.get("piece length");
if (val == null)
throw new InvalidBEncodingException("Missing piece length number");
piece_length = val.getInt();
val = (BEValue)info.get("pieces");
if (val == null)
throw new InvalidBEncodingException("Missing piece bytes");
piece_hashes = val.getBytes();
val = (BEValue)info.get("length");
if (val != null)
{
// Single file case.
length = val.getLong();
files = null;
files_utf8 = null;
lengths = null;
}
else
{
// Multi file case.
val = (BEValue)info.get("files");
if (val == null)
throw new InvalidBEncodingException
("Missing length number and/or files list");
List list = val.getList();
int size = list.size();
if (size == 0)
throw new InvalidBEncodingException("zero size files list");
files = new ArrayList(size);
files_utf8 = new ArrayList(size);
lengths = new ArrayList(size);
long l = 0;
for (int i = 0; i < list.size(); i++)
{
Map desc = ((BEValue)list.get(i)).getMap();
val = (BEValue)desc.get("length");
if (val == null)
throw new InvalidBEncodingException("Missing length number");
long len = val.getLong();
lengths.add(new Long(len));
l += len;
val = (BEValue)desc.get("path");
if (val == null)
throw new InvalidBEncodingException("Missing path list");
List path_list = val.getList();
int path_length = path_list.size();
if (path_length == 0)
throw new InvalidBEncodingException("zero size file path list");
List file = new ArrayList(path_length);
Iterator it = path_list.iterator();
while (it.hasNext())
file.add(((BEValue)it.next()).getString());
files.add(file);
val = (BEValue)desc.get("path.utf-8");
if (val != null) {
path_list = val.getList();
path_length = path_list.size();
if (path_length > 0) {
file = new ArrayList(path_length);
it = path_list.iterator();
while (it.hasNext())
file.add(((BEValue)it.next()).getString());
files_utf8.add(file);
}
}
}
length = l;
}
info_hash = calculateInfoHash();
}
/**
* Returns the string representing the URL of the tracker for this torrent.
*/
public String getAnnounce()
{
return announce;
}
/**
* Returns the original 20 byte SHA1 hash over the bencoded info map.
*/
public byte[] getInfoHash()
{
// XXX - Should we return a clone, just to be sure?
return info_hash;
}
/**
* Returns the piece hashes. Only used by storage so package local.
*/
byte[] getPieceHashes()
{
return piece_hashes;
}
/**
* Returns the requested name for the file or toplevel directory.
* If it is a toplevel directory name getFiles() will return a
* non-null List of file name hierarchies.
*/
public String getName()
{
return name;
}
/**
* Returns a list of lists of file name hierarchies or null if it is
* a single name. It has the same size as the list returned by
* getLengths().
*/
public List getFiles()
{
// XXX - Immutable?
return files;
}
/**
* Returns a list of Longs indicating the size of the individual
* files, or null if it is a single file. It has the same size as
* the list returned by getFiles().
*/
public List getLengths()
{
// XXX - Immutable?
return lengths;
}
/**
* Returns the number of pieces.
*/
public int getPieces()
{
return piece_hashes.length/20;
}
/**
* Return the length of a piece. All pieces are of equal length
* except for the last one (<code>getPieces()-1</code>).
*
* @exception IndexOutOfBoundsException when piece is equal to or
* greater than the number of pieces in the torrent.
*/
public int getPieceLength(int piece)
{
int pieces = getPieces();
if (piece >= 0 && piece < pieces -1)
return piece_length;
else if (piece == pieces -1)
return (int)(length - piece * piece_length);
else
throw new IndexOutOfBoundsException("no piece: " + piece);
}
/**
* Checks that the given piece has the same SHA1 hash as the given
* byte array. Returns random results or IndexOutOfBoundsExceptions
* when the piece number is unknown.
*/
public boolean checkPiece(int piece, byte[] bs, int off, int length)
{
if (true)
return fast_checkPiece(piece, bs, off, length);
else
return orig_checkPiece(piece, bs, off, length);
}
private boolean orig_checkPiece(int piece, byte[] bs, int off, int length) {
// Check digest
MessageDigest sha1;
try
{
sha1 = MessageDigest.getInstance("SHA");
}
catch (NoSuchAlgorithmException nsae)
{
throw new InternalError("No SHA digest available: " + nsae);
}
sha1.update(bs, off, length);
byte[] hash = sha1.digest();
for (int i = 0; i < 20; i++)
if (hash[i] != piece_hashes[20 * piece + i])
return false;
return true;
}
private boolean fast_checkPiece(int piece, byte[] bs, int off, int length) {
SHA1 sha1 = new SHA1();
sha1.update(bs, off, length);
byte[] hash = sha1.digest();
for (int i = 0; i < 20; i++)
if (hash[i] != piece_hashes[20 * piece + i])
return false;
return true;
}
/**
* Returns the total length of the torrent in bytes.
*/
public long getTotalLength()
{
return length;
}
public String toString()
{
return "MetaInfo[info_hash='" + hexencode(info_hash)
+ "', announce='" + announce
+ "', name='" + name
+ "', files=" + files
+ ", #pieces='" + piece_hashes.length/20
+ "', piece_length='" + piece_length
+ "', length='" + length
+ "']";
}
/**
* Encode a byte array as a hex encoded string.
*/
private static String hexencode(byte[] bs)
{
StringBuffer sb = new StringBuffer(bs.length*2);
for (int i = 0; i < bs.length; i++)
{
int c = bs[i] & 0xFF;
if (c < 16)
sb.append('0');
sb.append(Integer.toHexString(c));
}
return sb.toString();
}
/**
* Creates a copy of this MetaInfo that shares everything except the
* announce URL.
*/
public MetaInfo reannounce(String announce)
{
return new MetaInfo(announce, name, name_utf8, files,
lengths, piece_length,
piece_hashes, length);
}
public byte[] getTorrentData()
{
if (torrentdata == null)
{
Map m = new HashMap();
m.put("announce", announce);
Map info = createInfoMap();
m.put("info", info);
torrentdata = BEncoder.bencode(m);
}
return torrentdata;
}
private Map createInfoMap()
{
Map info = new HashMap();
if (infoMap != null) {
info.putAll(infoMap);
return info;
}
info.put("name", name);
if (name_utf8 != null)
info.put("name.utf-8", name_utf8);
info.put("piece length", new Integer(piece_length));
info.put("pieces", piece_hashes);
if (files == null)
info.put("length", new Long(length));
else
{
List l = new ArrayList();
for (int i = 0; i < files.size(); i++)
{
Map file = new HashMap();
file.put("path", files.get(i));
if ( (files_utf8 != null) && (files_utf8.size() > i) )
file.put("path.utf-8", files_utf8.get(i));
file.put("length", lengths.get(i));
l.add(file);
}
info.put("files", l);
}
return info;
}
private byte[] calculateInfoHash()
{
Map info = createInfoMap();
StringBuffer buf = new StringBuffer(128);
buf.append("info: ");
for (Iterator iter = info.keySet().iterator(); iter.hasNext(); ) {
String key = (String)iter.next();
Object val = info.get(key);
buf.append(key).append('=');
if (val instanceof byte[])
buf.append(Base64.encode((byte[])val, true));
else
buf.append(val.toString());
}
_log.debug(buf.toString());
byte[] infoBytes = BEncoder.bencode(info);
//_log.debug("info bencoded: [" + Base64.encode(infoBytes, true) + "]");
try
{
MessageDigest digest = MessageDigest.getInstance("SHA");
byte hash[] = digest.digest(infoBytes);
_log.debug("info hash: [" + net.i2p.data.Base64.encode(hash) + "]");
return hash;
}
catch(NoSuchAlgorithmException nsa)
{
throw new InternalError(nsa.toString());
}
}
}
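A usage sketch for the public InputStream constructor above; the file path is a placeholder and the dump class is illustrative, not part of this changeset.

package org.klomp.snark;

import java.io.FileInputStream;
import java.io.IOException;

/** Illustrative dump of the basic fields parsed from a .torrent file. */
class MetaInfoDumpSketch
{
  public static void main(String[] args) throws IOException
  {
    String path = args.length > 0 ? args[0] : "example.torrent"; // placeholder path
    FileInputStream in = new FileInputStream(path);
    try
    {
      MetaInfo meta = new MetaInfo(in); // bdecodes the dictionary and hashes the info map
      System.out.println("name:     " + meta.getName());
      System.out.println("announce: " + meta.getAnnounce());
      System.out.println("pieces:   " + meta.getPieces());
      System.out.println("total:    " + meta.getTotalLength() + " bytes");
    }
    finally
    {
      in.close();
    }
  }
}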

View File

@@ -0,0 +1,573 @@
/* Peer - All public information concerning a peer.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;
import net.i2p.client.streaming.I2PSocket;
import net.i2p.util.Log;
public class Peer implements Comparable
{
private Log _log = new Log(Peer.class);
// Identifying property, the peer id of the other side.
private final PeerID peerID;
private final byte[] my_id;
final MetaInfo metainfo;
// The data in/output streams set during the handshake and used by
// the actual connections.
private DataInputStream din;
private DataOutputStream dout;
// Keeps state for in/out connections. Non-null when the handshake
// was successful, the connection setup and runs
PeerState state;
private I2PSocket sock;
private boolean deregister = true;
private static long __id;
private long _id;
final static long CHECK_PERIOD = PeerCoordinator.CHECK_PERIOD; // 40 seconds
final static int RATE_DEPTH = PeerCoordinator.RATE_DEPTH; // make following arrays RATE_DEPTH long
private long uploaded_old[] = {-1,-1,-1,-1,-1,-1};
private long downloaded_old[] = {-1,-1,-1,-1,-1,-1};
/**
* Creates a disconnected peer given a PeerID, your own id and the
* relevant MetaInfo.
*/
public Peer(PeerID peerID, byte[] my_id, MetaInfo metainfo)
throws IOException
{
this.peerID = peerID;
this.my_id = my_id;
this.metainfo = metainfo;
_id = ++__id;
//_log.debug("Creating a new peer with " + peerID.getAddress().calculateHash().toBase64(), new Exception("creating"));
}
/**
* Creates an unconnected peer from the input and output streams obtained
* from the socket. Note that the complete handshake (which can take
* some time or block indefinitely) is done in the calling Thread to
* get the remote peer id. To completely start the connection call
* the connect() method.
*
* @exception IOException when an error occurred during the handshake.
*/
public Peer(final I2PSocket sock, InputStream in, OutputStream out, byte[] my_id, MetaInfo metainfo)
throws IOException
{
this.my_id = my_id;
this.metainfo = metainfo;
this.sock = sock;
byte[] id = handshake(in, out);
this.peerID = new PeerID(id, sock.getPeerDestination());
_id = ++__id;
_log.debug("Creating a new peer with " + peerID.getAddress().calculateHash().toBase64(), new Exception("creating " + _id));
}
/**
* Returns the id of the peer.
*/
public PeerID getPeerID()
{
return peerID;
}
/**
* Returns the String representation of the peerID.
*/
public String toString()
{
if (peerID != null)
return peerID.toString() + ' ' + _id;
else
return "[unknown id] " + _id;
}
/**
* Returns socket (for debug printing)
*/
public String getSocket()
{
return sock.toString();
}
/**
* The hash code of a Peer is the hash code of the peerID.
*/
public int hashCode()
{
return peerID.hashCode() ^ (2 << _id);
}
/**
* Two Peers are equal when they have the same PeerID.
* All other properties are ignored.
*/
public boolean equals(Object o)
{
if (o instanceof Peer)
{
Peer p = (Peer)o;
return _id == p._id && peerID.equals(p.peerID);
}
else
return false;
}
/**
* Compares the PeerIDs.
*/
public int compareTo(Object o)
{
Peer p = (Peer)o;
int rv = peerID.compareTo(p.peerID);
if (rv == 0) {
if (_id > p._id) return 1;
else if (_id < p._id) return -1;
else return 0;
} else {
return rv;
}
}
/**
* Runs the connection to the other peer. This method does not
* return until the connection is terminated.
*
* When the connection is correctly started the connected() method
* of the given PeerListener is called. If the connection ends or
* the connection could not be setup correctly the disconnected()
* method is called.
*
* If the given BitField is non-null it is sent to the peer as the first
* message.
*/
public void runConnection(PeerListener listener, BitField bitfield)
{
if (state != null)
throw new IllegalStateException("Peer already started");
_log.debug("Running connection to " + peerID.getAddress().calculateHash().toBase64(), new Exception("connecting"));
try
{
// Do we need to handshake?
if (din == null)
{
sock = I2PSnarkUtil.instance().connect(peerID);
_log.debug("Connected to " + peerID + ": " + sock);
if ((sock == null) || (sock.isClosed())) {
throw new IOException("Unable to reach " + peerID);
}
InputStream in = sock.getInputStream();
OutputStream out = sock.getOutputStream(); //new BufferedOutputStream(sock.getOutputStream());
if (true) {
// buffered output streams are internally synchronized, so we can't get through to the underlying
// I2PSocket's MessageOutputStream to close() it if we are blocking on a write(...). Oh, and the
// buffer is unnecessary anyway, as unbuffered access lets the streaming lib do the 'right thing'.
//out = new BufferedOutputStream(out);
in = new BufferedInputStream(sock.getInputStream());
}
//BufferedInputStream bis
// = new BufferedInputStream(sock.getInputStream());
//BufferedOutputStream bos
// = new BufferedOutputStream(sock.getOutputStream());
byte [] id = handshake(in, out); //handshake(bis, bos);
byte [] expected_id = peerID.getID();
if (!Arrays.equals(expected_id, id))
throw new IOException("Unexpected peerID '"
+ PeerID.idencode(id)
+ "' expected '"
+ PeerID.idencode(expected_id) + "'");
_log.debug("Handshake got matching IDs with " + toString());
} else {
_log.debug("Already have din [" + sock + "] with " + toString());
}
PeerConnectionIn in = new PeerConnectionIn(this, din);
PeerConnectionOut out = new PeerConnectionOut(this, dout);
PeerState s = new PeerState(this, listener, metainfo, in, out);
// Send our bitmap
if (bitfield != null)
s.out.sendBitfield(bitfield);
// We are up and running!
state = s;
listener.connected(this);
_log.debug("Start running the reader with " + toString());
// Use this thread for running the incoming connection.
// The outgoing connection creates its own Thread.
out.startup();
Thread.currentThread().setName("Snark reader from " + peerID);
s.in.run();
}
catch(IOException eofe)
{
// Ignore, probably just the other side closing the connection.
// Or refusing the connection, timing out, etc.
if (_log.shouldLog(Log.DEBUG))
_log.debug(this.toString(), eofe);
}
catch(Throwable t)
{
_log.error(this + ": " + t.getMessage(), t);
if (t instanceof OutOfMemoryError)
throw (OutOfMemoryError)t;
}
finally
{
if (deregister) listener.disconnected(this);
disconnect();
}
}
/**
* Sets DataIn/OutputStreams, does the handshake and returns the id
* reported by the other side.
*/
private byte[] handshake(InputStream in, OutputStream out) //BufferedInputStream bis, BufferedOutputStream bos)
throws IOException
{
din = new DataInputStream(in);
dout = new DataOutputStream(out);
// Handshake write - header
dout.write(19);
dout.write("BitTorrent protocol".getBytes("UTF-8"));
// Handshake write - zeros
byte[] zeros = new byte[8];
dout.write(zeros);
// Handshake write - metainfo hash
byte[] shared_hash = metainfo.getInfoHash();
dout.write(shared_hash);
// Handshake write - peer id
dout.write(my_id);
dout.flush();
_log.debug("Wrote my shared hash and ID to " + toString());
// Handshake read - header
byte b = din.readByte();
if (b != 19)
throw new IOException("Handshake failure, expected 19, got "
+ (b & 0xff) + " on " + sock);
byte[] bs = new byte[19];
din.readFully(bs);
String bittorrentProtocol = new String(bs, "UTF-8");
if (!"BitTorrent protocol".equals(bittorrentProtocol))
throw new IOException("Handshake failure, expected "
+ "'Bittorrent protocol', got '"
+ bittorrentProtocol + "'");
// Handshake read - zeros
din.readFully(zeros);
// Handshake read - metainfo hash
bs = new byte[20];
din.readFully(bs);
if (!Arrays.equals(shared_hash, bs))
throw new IOException("Unexpected MetaInfo hash");
// Handshake read - peer id
din.readFully(bs);
_log.debug("Read the remote side's hash and peerID fully from " + toString());
return bs;
}
public boolean isConnected()
{
return state != null;
}
/**
* Disconnects this peer if it was connected. If deregister is
* true, PeerListener.disconnected() will be called when the
* connection is completely terminated. Otherwise the connection is
* silently terminated.
*/
public void disconnect(boolean deregister)
{
// Both in and out connection will call this.
this.deregister = deregister;
disconnect();
}
void disconnect()
{
PeerState s = state;
if (s != null)
{
// try to save partial piece
if (this.deregister) {
PeerListener p = s.listener;
if (p != null) {
p.savePeerPartial(s);
p.markUnrequested(this);
}
}
state = null;
PeerConnectionIn in = s.in;
if (in != null)
in.disconnect();
PeerConnectionOut out = s.out;
if (out != null)
out.disconnect();
PeerListener pl = s.listener;
if (pl != null)
pl.disconnected(this);
}
I2PSocket csock = sock;
sock = null;
if ( (csock != null) && (!csock.isClosed()) ) {
try {
csock.close();
} catch (IOException ioe) {
_log.warn("Error disconnecting " + toString(), ioe);
}
}
}
/**
* Tell the peer we have another piece.
*/
public void have(int piece)
{
PeerState s = state;
if (s != null)
s.havePiece(piece);
}
/**
* Whether or not the peer is interested in pieces we have. Returns
* false if not connected.
*/
public boolean isInterested()
{
PeerState s = state;
return (s != null) && s.interested;
}
/**
* Sets whether or not we are interested in pieces from this peer.
* Defaults to false. When interest is true and this peer unchokes
* us then we start downloading from it. Has no effect when not connected.
*/
public void setInteresting(boolean interest)
{
PeerState s = state;
if (s != null)
s.setInteresting(interest);
}
/**
* Whether or not the peer has pieces we want from it. Returns false
* if not connected.
*/
public boolean isInteresting()
{
PeerState s = state;
return (s != null) && s.interesting;
}
/**
* Sets whether or not we are choking the peer. Defaults to
* true. When choke is false and the peer requests some pieces we
* upload them, otherwise requests of this peer are ignored.
*/
public void setChoking(boolean choke)
{
PeerState s = state;
if (s != null)
s.setChoking(choke);
}
/**
* Whether or not we are choking the peer. Returns true when not connected.
*/
public boolean isChoking()
{
PeerState s = state;
return (s == null) || s.choking;
}
/**
* Whether or not the peer choked us. Returns true when not connected.
*/
public boolean isChoked()
{
PeerState s = state;
return (s == null) || s.choked;
}
/**
* Returns the number of bytes that have been downloaded.
* Can be reset to zero with <code>resetCounters()</code>.
*/
public long getDownloaded()
{
PeerState s = state;
return (s != null) ? s.downloaded : 0;
}
/**
* Returns the number of bytes that have been uploaded.
* Can be reset to zero with <code>resetCounters()</code>.
*/
public long getUploaded()
{
PeerState s = state;
return (s != null) ? s.uploaded : 0;
}
/**
* Resets the downloaded and uploaded counters to zero.
*/
public void resetCounters()
{
PeerState s = state;
if (s != null)
{
s.downloaded = 0;
s.uploaded = 0;
}
}
public long getInactiveTime() {
PeerState s = state;
if (s != null) {
PeerConnectionIn in = s.in;
PeerConnectionOut out = s.out;
if (in != null && out != null) {
long now = System.currentTimeMillis();
return Math.max(now - out.lastSent, now - in.lastRcvd);
} else
return -1; //"state, no out";
} else {
return -1; //"no state";
}
}
/**
* Send keepalive
*/
public void keepAlive()
{
PeerState s = state;
if (s != null)
s.keepAlive();
}
/**
* Retransmit outstanding requests if necessary
*/
public void retransmitRequests()
{
PeerState s = state;
if (s != null)
s.retransmitRequests();
}
/**
* Return how much the peer has
*/
public int completed()
{
PeerState s = state;
if (s == null || s.bitfield == null)
return 0;
return s.bitfield.count();
}
/**
* Return if a peer is a seeder
*/
public boolean isCompleted()
{
PeerState s = state;
if (s == null || s.bitfield == null)
return false;
return s.bitfield.complete();
}
/**
* Push the total uploaded/downloaded onto a RATE_DEPTH deep stack
*/
public void setRateHistory(long up, long down)
{
setRate(up, uploaded_old);
setRate(down, downloaded_old);
}
private void setRate(long val, long array[])
{
synchronized(array) {
for (int i = RATE_DEPTH-1; i > 0; i--)
array[i] = array[i-1];
array[0] = val;
}
}
/**
* Returns the 4-minute-average rate in Bps
*/
public long getUploadRate()
{
return getRate(uploaded_old);
}
public long getDownloadRate()
{
return getRate(downloaded_old);
}
private long getRate(long array[])
{
long rate = 0;
int i = 0;
synchronized(array) {
for ( ; i < RATE_DEPTH; i++){
if (array[i] < 0)
break;
rate += array[i];
}
}
if (i == 0)
return 0;
return rate / (i * CHECK_PERIOD / 1000);
}
}
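
The handshake that Peer.handshake() writes and reads above is the standard 68-byte BitTorrent header: a length byte of 19, the 19-byte protocol string, 8 reserved (zero) bytes, the 20-byte infohash and the 20-byte peer id. Below is a minimal standalone sketch of assembling that byte sequence; the class and method names are illustrative and not part of the source.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative sketch only: assembles the same 68-byte handshake
// (1 + 19 + 8 + 20 + 20 bytes) that Peer.handshake() sends.
class HandshakeSketch
{
  static byte[] build(byte[] infoHash, byte[] peerId) throws IOException
  {
    ByteArrayOutputStream baos = new ByteArrayOutputStream(68);
    DataOutputStream out = new DataOutputStream(baos);
    out.write(19);                                      // pstrlen
    out.write("BitTorrent protocol".getBytes("UTF-8")); // 19-byte protocol string
    out.write(new byte[8]);                             // reserved, all zeros
    out.write(infoHash);                                // 20-byte metainfo hash
    out.write(peerId);                                  // 20-byte peer id
    out.flush();
    return baos.toByteArray();                          // 68 bytes total
  }
}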


@@ -0,0 +1,149 @@
/* PeerAcceptor - Accepts incoming connections from peers.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.SequenceInputStream;
import java.util.Iterator;
import net.i2p.client.streaming.I2PSocket;
import net.i2p.data.Base64;
import net.i2p.data.DataHelper;
import net.i2p.util.Log;
/**
* Accepts incoming connections from peers. The ConnectionAcceptor
* will call the connection() method when it detects an incoming BT
* protocol connection. The PeerAcceptor will then create a new peer
* if the PeerCoordinator wants more peers.
*/
public class PeerAcceptor
{
private static final Log _log = new Log(PeerAcceptor.class);
private final PeerCoordinator coordinator;
final PeerCoordinatorSet coordinators;
public PeerAcceptor(PeerCoordinator coordinator)
{
this.coordinator = coordinator;
this.coordinators = null;
}
public PeerAcceptor(PeerCoordinatorSet coordinators)
{
this.coordinators = coordinators;
this.coordinator = null;
}
public void connection(I2PSocket socket,
InputStream in, OutputStream out)
throws IOException
{
// Inside this Peer constructor's handshake is where you'd deal with the other
// side saying they want to communicate with another torrent - aka multitorrent
// support. Because of how the protocol works, we can get away with just reading
// ahead the first LOOKAHEAD_SIZE bytes to figure out which infohash they want to
// talk about, and then look for that in our list of active torrents.
byte peerInfoHash[] = null;
if (in instanceof BufferedInputStream) {
in.mark(LOOKAHEAD_SIZE);
peerInfoHash = readHash(in);
in.reset();
} else {
// is this working right?
try {
peerInfoHash = readHash(in);
_log.info("infohash read from " + socket.getPeerDestination().calculateHash().toBase64()
+ ": " + Base64.encode(peerInfoHash));
} catch (IOException ioe) {
_log.info("Unable to read the infohash from " + socket.getPeerDestination().calculateHash().toBase64());
throw ioe;
}
in = new SequenceInputStream(new ByteArrayInputStream(peerInfoHash), in);
}
if (coordinator != null) {
// single torrent capability
MetaInfo meta = coordinator.getMetaInfo();
if (DataHelper.eq(meta.getInfoHash(), peerInfoHash)) {
if (coordinator.needPeers())
{
Peer peer = new Peer(socket, in, out, coordinator.getID(),
coordinator.getMetaInfo());
coordinator.addPeer(peer);
}
else
socket.close();
} else {
// it's for another infohash, but we are only single torrent capable. b0rk.
throw new IOException("Peer wants another torrent (" + Base64.encode(peerInfoHash)
+ ") while we only support (" + Base64.encode(meta.getInfoHash()) + ")");
}
} else {
// multitorrent capable, so let's see what we can handle
for (Iterator iter = coordinators.iterator(); iter.hasNext(); ) {
PeerCoordinator cur = (PeerCoordinator)iter.next();
MetaInfo meta = cur.getMetaInfo();
if (DataHelper.eq(meta.getInfoHash(), peerInfoHash)) {
if (cur.needPeers())
{
Peer peer = new Peer(socket, in, out, cur.getID(),
cur.getMetaInfo());
cur.addPeer(peer);
return;
}
else
{
if (_log.shouldLog(Log.DEBUG))
_log.debug("Rejecting new peer for " + cur.snark.torrent);
socket.close();
return;
}
}
}
// this is only reached if none of the coordinators match the infohash
throw new IOException("Peer wants another torrent (" + Base64.encode(peerInfoHash)
+ ") while we don't support that hash");
}
}
private static final int LOOKAHEAD_SIZE = "19".length() +
"BitTorrent protocol".length() +
8 + // blank, reserved
20; // infohash
/**
* Read ahead to the infohash, throwing an exception if there isn't enough data
*/
private byte[] readHash(InputStream in) throws IOException {
byte buf[] = new byte[LOOKAHEAD_SIZE];
int read = DataHelper.read(in, buf);
if (read != buf.length)
throw new IOException("Unable to read the hash (read " + read + ")");
byte rv[] = new byte[20];
System.arraycopy(buf, buf.length-rv.length-1, rv, 0, rv.length);
return rv;
}
}
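
The arithmetic behind LOOKAHEAD_SIZE and readHash() above works out as follows: "19".length() is 2, so LOOKAHEAD_SIZE = 2 + 19 + 8 + 20 = 49, one byte more than the 48 bytes (1 + 19 + 8 + 20) that actually precede the end of the infohash; the copy therefore starts at 49 - 20 - 1 = 28, right after the length byte, protocol string and reserved bytes. A tiny standalone check of those offsets, for illustration only:

// Illustrative check of the offsets used by PeerAcceptor.readHash().
class LookaheadOffsets
{
  public static void main(String[] args)
  {
    int lookahead = "19".length() + "BitTorrent protocol".length() + 8 + 20;
    int copyFrom = lookahead - 20 - 1;
    System.out.println("lookahead=" + lookahead + " copyFrom=" + copyFrom);
    // prints: lookahead=49 copyFrom=28
  }
}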


@@ -0,0 +1,252 @@
/* PeerCheckerTask - TimerTask that checks for good/bad up/downloaders.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Random;
import java.util.TimerTask;
/**
* TimerTask that checks for good/bad up/downloaders. Works together
* with the PeerCoordinator to select which Peers get (un)choked.
*/
class PeerCheckerTask extends TimerTask
{
private final long KILOPERSECOND = 1024*(PeerCoordinator.CHECK_PERIOD/1000);
private final PeerCoordinator coordinator;
PeerCheckerTask(PeerCoordinator coordinator)
{
this.coordinator = coordinator;
}
private Random random = new Random();
public void run()
{
synchronized(coordinator.peers)
{
Iterator it = coordinator.peers.iterator();
if ((!it.hasNext()) || coordinator.halted()) {
coordinator.peerCount = 0;
coordinator.interestedAndChoking = 0;
coordinator.setRateHistory(0, 0);
coordinator.uploaders = 0;
if (coordinator.halted())
cancel();
return;
}
// Calculate total uploading and worst downloader.
long worstdownload = Long.MAX_VALUE;
Peer worstDownloader = null;
int peers = 0;
int uploaders = 0;
int downloaders = 0;
int removedCount = 0;
long uploaded = 0;
long downloaded = 0;
// Keep track of peers we remove now,
// we will add them back to the end of the list.
List removed = new ArrayList();
int uploadLimit = coordinator.allowedUploaders();
boolean overBWLimit = coordinator.overUpBWLimit();
while (it.hasNext())
{
Peer peer = (Peer)it.next();
// Remove dying peers
if (!peer.isConnected())
{
it.remove();
coordinator.removePeerFromPieces(peer);
coordinator.peerCount = coordinator.peers.size();
continue;
}
peers++;
if (!peer.isChoking())
uploaders++;
if (!peer.isChoked() && peer.isInteresting())
downloaders++;
long upload = peer.getUploaded();
uploaded += upload;
long download = peer.getDownloaded();
downloaded += download;
peer.setRateHistory(upload, download);
peer.resetCounters();
Snark.debug(peer + ":", Snark.DEBUG);
Snark.debug(" ul: " + upload/KILOPERSECOND
+ " dl: " + download/KILOPERSECOND
+ " i: " + peer.isInterested()
+ " I: " + peer.isInteresting()
+ " c: " + peer.isChoking()
+ " C: " + peer.isChoked(),
Snark.DEBUG);
// Choke half of them rather than all so it isn't so drastic...
// unless this torrent is over the limit all by itself.
boolean overBWLimitChoke = upload > 0 &&
((overBWLimit && random.nextBoolean()) ||
(coordinator.overUpBWLimit(uploaded)));
// If we are at our max uploaders and we have lots of other
// interested peers try to make some room.
// (Note use of coordinator.uploaders)
if (((coordinator.uploaders == uploadLimit
&& coordinator.interestedAndChoking > 0)
|| coordinator.uploaders > uploadLimit
|| overBWLimitChoke)
&& !peer.isChoking())
{
// Check if it still wants pieces from us.
if (!peer.isInterested())
{
Snark.debug("Choke uninterested peer: " + peer,
Snark.INFO);
peer.setChoking(true);
uploaders--;
coordinator.uploaders--;
// Put it at the back of the list
it.remove();
removed.add(peer);
}
else if (overBWLimitChoke)
{
Snark.debug("BW limit (" + upload + "/" + uploaded + "), choke peer: " + peer,
Snark.INFO);
peer.setChoking(true);
uploaders--;
coordinator.uploaders--;
removedCount++;
// Put it at the back of the list for fairness, even though we won't be unchoking this time
it.remove();
removed.add(peer);
}
else if (peer.isInteresting() && peer.isChoked())
{
// If they are choking us make someone else a downloader
Snark.debug("Choke choking peer: " + peer, Snark.DEBUG);
peer.setChoking(true);
uploaders--;
coordinator.uploaders--;
removedCount++;
// Put it at the back of the list
it.remove();
removed.add(peer);
}
else if (!peer.isInteresting() && !coordinator.completed())
{
// If they aren't interesting make someone else a downloader
Snark.debug("Choke uninteresting peer: " + peer, Snark.DEBUG);
peer.setChoking(true);
uploaders--;
coordinator.uploaders--;
removedCount++;
// Put it at the back of the list
it.remove();
removed.add(peer);
}
else if (peer.isInteresting()
&& !peer.isChoked()
&& download == 0)
{
// We are downloading but didn't receive anything...
Snark.debug("Choke downloader that doesn't deliver:"
+ peer, Snark.DEBUG);
peer.setChoking(true);
uploaders--;
coordinator.uploaders--;
removedCount++;
// Put it at the back of the list
it.remove();
removed.add(peer);
}
else if (peer.isInteresting() && !peer.isChoked() &&
download < worstdownload)
{
// Make sure download is good if we are uploading
worstdownload = download;
worstDownloader = peer;
}
else if (upload < worstdownload && coordinator.completed())
{
// Make sure upload is good if we are seeding
worstdownload = upload;
worstDownloader = peer;
}
}
peer.retransmitRequests();
peer.keepAlive();
}
// Resync actual uploaders value
// (can shift a bit by disconnecting peers)
coordinator.uploaders = uploaders;
// Remove the worst downloader if needed. (uploader if seeding)
if (((uploaders == uploadLimit
&& coordinator.interestedAndChoking > 0)
|| uploaders > uploadLimit)
&& worstDownloader != null)
{
Snark.debug("Choke worst downloader: " + worstDownloader,
Snark.DEBUG);
worstDownloader.setChoking(true);
coordinator.uploaders--;
removedCount++;
// Put it at the back of the list
coordinator.peers.remove(worstDownloader);
coordinator.peerCount = coordinator.peers.size();
removed.add(worstDownloader);
}
// Optimistically unchoke a peer
if ((!overBWLimit) && !coordinator.overUpBWLimit(uploaded))
coordinator.unchokePeer();
// Put peers back at the end of the list that we removed earlier.
coordinator.peers.addAll(removed);
coordinator.peerCount = coordinator.peers.size();
coordinator.interestedAndChoking += removedCount;
// store the rates
coordinator.setRateHistory(uploaded, downloaded);
}
}
}
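
The per-peer debug output above divides the byte counts accumulated over one CHECK_PERIOD by KILOPERSECOND = 1024 * (CHECK_PERIOD / 1000) = 1024 * 40 = 40960, which turns bytes-per-40-seconds into an approximate KB/s figure. A small sketch of that conversion; the sample value is made up for illustration.

// Illustrative sketch of the KB/s figure printed by PeerCheckerTask:
// bytes moved during one 40-second check period, divided by 1024 * 40.
class RateDebugSketch
{
  static final long CHECK_PERIOD = 40 * 1000;
  static final long KILOPERSECOND = 1024 * (CHECK_PERIOD / 1000); // 40960

  public static void main(String[] args)
  {
    long uploadedThisPeriod = 819200; // example: 800 KB moved in 40 seconds
    System.out.println((uploadedThisPeriod / KILOPERSECOND) + " KB/s"); // 20 KB/s
  }
}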


@@ -0,0 +1,199 @@
/* PeerConnectionIn - Handles incoming messages and hands them to PeerState.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.DataInputStream;
import java.io.IOException;
import net.i2p.util.Log;
class PeerConnectionIn implements Runnable
{
private Log _log = new Log(PeerConnectionIn.class);
private final Peer peer;
private final DataInputStream din;
private Thread thread;
private volatile boolean quit;
long lastRcvd;
public PeerConnectionIn(Peer peer, DataInputStream din)
{
this.peer = peer;
this.din = din;
lastRcvd = System.currentTimeMillis();
quit = false;
}
void disconnect()
{
if (quit == true)
return;
quit = true;
Thread t = thread;
if (t != null)
t.interrupt();
if (din != null) {
try {
din.close();
} catch (IOException ioe) {
_log.warn("Error closing the stream from " + peer, ioe);
}
}
}
public void run()
{
thread = Thread.currentThread();
try
{
PeerState ps = peer.state;
while (!quit && ps != null)
{
// Common variables used for some messages.
int piece;
int begin;
int len;
// Wait till we hear something...
// The length of a complete message in bytes.
// The biggest is the piece message, for which the length is the
// request size (32K) plus 9. (we could also check if Storage.MAX_PIECES / 8
// in the bitfield message is bigger but it's currently 5000/8 = 625 so don't bother)
int i = din.readInt();
lastRcvd = System.currentTimeMillis();
if (i < 0 || i > PeerState.PARTSIZE + 9)
throw new IOException("Unexpected length prefix: " + i);
if (i == 0)
{
ps.keepAliveMessage();
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received keepalive from " + peer + " on " + peer.metainfo.getName());
continue;
}
byte b = din.readByte();
Message m = new Message();
m.type = b;
switch (b)
{
case 0:
ps.chokeMessage(true);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received choke from " + peer + " on " + peer.metainfo.getName());
break;
case 1:
ps.chokeMessage(false);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received unchoke from " + peer + " on " + peer.metainfo.getName());
break;
case 2:
ps.interestedMessage(true);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received interested from " + peer + " on " + peer.metainfo.getName());
break;
case 3:
ps.interestedMessage(false);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received not interested from " + peer + " on " + peer.metainfo.getName());
break;
case 4:
piece = din.readInt();
ps.haveMessage(piece);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received havePiece(" + piece + ") from " + peer + " on " + peer.metainfo.getName());
break;
case 5:
byte[] bitmap = new byte[i-1];
din.readFully(bitmap);
ps.bitfieldMessage(bitmap);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received bitmap from " + peer + " on " + peer.metainfo.getName() + ": size=" + (i-1) + ": " + ps.bitfield);
break;
case 6:
piece = din.readInt();
begin = din.readInt();
len = din.readInt();
ps.requestMessage(piece, begin, len);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received request(" + piece + "," + begin + ") from " + peer + " on " + peer.metainfo.getName());
break;
case 7:
piece = din.readInt();
begin = din.readInt();
len = i-9;
Request req = ps.getOutstandingRequest(piece, begin, len);
byte[] piece_bytes;
if (req != null)
{
piece_bytes = req.bs;
din.readFully(piece_bytes, begin, len);
ps.pieceMessage(req);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received data(" + piece + "," + begin + ") from " + peer + " on " + peer.metainfo.getName());
}
else
{
// XXX - Consume but throw away afterwards.
piece_bytes = new byte[len];
din.readFully(piece_bytes);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received UNWANTED data(" + piece + "," + begin + ") from " + peer + " on " + peer.metainfo.getName());
}
break;
case 8:
piece = din.readInt();
begin = din.readInt();
len = din.readInt();
ps.cancelMessage(piece, begin, len);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received cancel(" + piece + "," + begin + ") from " + peer + " on " + peer.metainfo.getName());
break;
default:
byte[] bs = new byte[i-1];
din.readFully(bs);
ps.unknownMessage(b, bs);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Received unknown message from " + peer + " on " + peer.metainfo.getName());
}
}
}
catch (IOException ioe)
{
// Ignore, probably the other side closed connection.
if (_log.shouldLog(Log.INFO))
_log.info("IOError talking with " + peer, ioe);
}
catch (Throwable t)
{
_log.error("Error talking with " + peer, t);
if (t instanceof OutOfMemoryError)
throw (OutOfMemoryError)t;
}
finally
{
peer.disconnect();
}
}
}
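
Every message parsed in the loop above is framed as a 4-byte big-endian length prefix followed by a 1-byte type and its payload; a length of 0 is a keepalive, and the PARTSIZE + 9 bound corresponds to the largest legal message, a piece (1 type byte + two 4-byte ints + a 32K block). As an illustration, this is how a request message (type 6, length 13) would be framed on the wire; the class name is a stand-in, not part of the source.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative sketch of the wire framing PeerConnectionIn.run() decodes:
// 4-byte length prefix, 1-byte type, then the payload.
class RequestFrameSketch
{
  static byte[] encodeRequest(int piece, int begin, int len) throws IOException
  {
    ByteArrayOutputStream baos = new ByteArrayOutputStream(17);
    DataOutputStream out = new DataOutputStream(baos);
    out.writeInt(13);   // length: 1 type byte + 3 * 4 payload bytes
    out.writeByte(6);   // type 6 = request
    out.writeInt(piece);
    out.writeInt(begin);
    out.writeInt(len);
    out.flush();
    return baos.toByteArray(); // 17 bytes: 4-byte prefix + 13-byte message
  }
}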


@@ -0,0 +1,472 @@
/* PeerConnectionOut - Keeps a queue of outgoing messages and delivers them.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import net.i2p.util.I2PThread;
import net.i2p.util.Log;
import net.i2p.util.SimpleTimer;
class PeerConnectionOut implements Runnable
{
private Log _log = new Log(PeerConnectionOut.class);
private final Peer peer;
private final DataOutputStream dout;
private Thread thread;
private boolean quit;
// Contains Messages.
private List sendQueue = new ArrayList();
private static long __id = 0;
private long _id;
long lastSent;
public PeerConnectionOut(Peer peer, DataOutputStream dout)
{
this.peer = peer;
this.dout = dout;
_id = ++__id;
lastSent = System.currentTimeMillis();
quit = false;
}
public void startup() {
thread = new I2PThread(this, "Snark sender " + _id + ": " + peer);
thread.start();
}
/**
* Continuously monitors for more outgoing messages that have to be sent.
* Stops if quit is true or an IOException occurs.
*/
public void run()
{
try
{
while (!quit && peer.isConnected())
{
Message m = null;
PeerState state = null;
boolean shouldFlush;
synchronized(sendQueue)
{
shouldFlush = !quit && peer.isConnected() && sendQueue.isEmpty();
}
if (shouldFlush)
// Make sure everything will reach the other side.
// flush while not holding lock, could take a long time
dout.flush();
synchronized(sendQueue)
{
while (!quit && peer.isConnected() && sendQueue.isEmpty())
{
try
{
// Make sure everything will reach the other side.
// don't flush while holding lock, could take a long time
// dout.flush();
// Wait till more data arrives.
sendQueue.wait(60*1000);
}
catch (InterruptedException ie)
{
/* ignored */
}
}
state = peer.state;
if (!quit && state != null && peer.isConnected())
{
// Piece messages are big. So if there are other
// (control) messages make sure they are sent first.
// Also remove request messages from the queue if
// we are currently being choked to prevent them from
// being sent even if we get unchoked a little later.
// (Since we will resend them anyway in that case.)
// And remove piece messages if we are choking.
// this should get fixed for starvation
Iterator it = sendQueue.iterator();
while (m == null && it.hasNext())
{
Message nm = (Message)it.next();
if (nm.type == Message.PIECE)
{
if (state.choking) {
it.remove();
SimpleTimer.getInstance().removeEvent(nm.expireEvent);
}
nm = null;
}
else if (nm.type == Message.REQUEST && state.choked)
{
it.remove();
SimpleTimer.getInstance().removeEvent(nm.expireEvent);
nm = null;
}
if (m == null && nm != null)
{
m = nm;
SimpleTimer.getInstance().removeEvent(nm.expireEvent);
it.remove();
}
}
if (m == null && sendQueue.size() > 0) {
m = (Message)sendQueue.remove(0);
SimpleTimer.getInstance().removeEvent(m.expireEvent);
}
}
}
if (m != null)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug("Send " + peer + ": " + m + " on " + peer.metainfo.getName());
m.sendMessage(dout);
lastSent = System.currentTimeMillis();
// Remove all piece messages after sending a choke message.
if (m.type == Message.CHOKE)
removeMessage(Message.PIECE);
// XXX - Should also register overhead...
if (m.type == Message.PIECE)
state.uploaded(m.len);
m = null;
}
}
}
catch (IOException ioe)
{
// Ignore, probably other side closed connection.
if (_log.shouldLog(Log.INFO))
_log.info("IOError sending to " + peer, ioe);
}
catch (Throwable t)
{
_log.error("Error sending to " + peer, t);
if (t instanceof OutOfMemoryError)
throw (OutOfMemoryError)t;
}
finally
{
quit = true;
peer.disconnect();
}
}
public void disconnect()
{
synchronized(sendQueue)
{
//if (quit == true)
// return;
quit = true;
if (thread != null)
thread.interrupt();
sendQueue.clear();
sendQueue.notify();
}
if (dout != null) {
try {
dout.close();
} catch (IOException ioe) {
_log.warn("Error closing the stream to " + peer, ioe);
}
}
}
/**
* Adds a message to the sendQueue and notifies the method waiting
* on the sendQueue to change.
* For PIECE messages only, an expiration timeout is also added.
*/
private void addMessage(Message m)
{
if (m.type == Message.PIECE)
SimpleTimer.getInstance().addEvent(new RemoveTooSlow(m), SEND_TIMEOUT);
synchronized(sendQueue)
{
sendQueue.add(m);
sendQueue.notifyAll();
}
}
/** remove messages not sent in 3m */
private static final int SEND_TIMEOUT = 3*60*1000;
private class RemoveTooSlow implements SimpleTimer.TimedEvent {
private Message _m;
public RemoveTooSlow(Message m) {
_m = m;
m.expireEvent = RemoveTooSlow.this;
}
public void timeReached() {
boolean removed = false;
synchronized (sendQueue) {
removed = sendQueue.remove(_m);
sendQueue.notifyAll();
}
if (removed)
_log.info("Took too long to send " + _m + " to " + peer);
}
}
/**
* Removes a particular message type from the queue.
*
* @param type the Message type to remove.
* @return true when a message of the given type was removed, false
* otherwise.
*/
private boolean removeMessage(int type)
{
boolean removed = false;
synchronized(sendQueue)
{
Iterator it = sendQueue.iterator();
while (it.hasNext())
{
Message m = (Message)it.next();
if (m.type == type)
{
it.remove();
removed = true;
}
}
sendQueue.notifyAll();
}
return removed;
}
void sendAlive()
{
Message m = new Message();
m.type = Message.KEEP_ALIVE;
// addMessage(m);
synchronized(sendQueue)
{
if(sendQueue.isEmpty())
sendQueue.add(m);
sendQueue.notifyAll();
}
}
void sendChoke(boolean choke)
{
// We cancel the (un)choke but keep PIECE messages.
// PIECE messages are purged if a choke is actually sent.
synchronized(sendQueue)
{
int inverseType = choke ? Message.UNCHOKE
: Message.CHOKE;
if (!removeMessage(inverseType))
{
Message m = new Message();
if (choke)
m.type = Message.CHOKE;
else
m.type = Message.UNCHOKE;
addMessage(m);
}
}
}
void sendInterest(boolean interest)
{
synchronized(sendQueue)
{
int inverseType = interest ? Message.UNINTERESTED
: Message.INTERESTED;
if (!removeMessage(inverseType))
{
Message m = new Message();
if (interest)
m.type = Message.INTERESTED;
else
m.type = Message.UNINTERESTED;
addMessage(m);
}
}
}
void sendHave(int piece)
{
Message m = new Message();
m.type = Message.HAVE;
m.piece = piece;
addMessage(m);
}
void sendBitfield(BitField bitfield)
{
Message m = new Message();
m.type = Message.BITFIELD;
m.data = bitfield.getFieldBytes();
m.off = 0;
m.len = m.data.length;
addMessage(m);
}
/** retransmit requests not received in 7m */
private static final int REQ_TIMEOUT = (2 * SEND_TIMEOUT) + (60 * 1000);
void retransmitRequests(List requests)
{
long now = System.currentTimeMillis();
Iterator it = requests.iterator();
while (it.hasNext())
{
Request req = (Request)it.next();
if(now > req.sendTime + REQ_TIMEOUT) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Retransmit request " + req + " to peer " + peer);
sendRequest(req);
}
}
}
void sendRequests(List requests)
{
Iterator it = requests.iterator();
while (it.hasNext())
{
Request req = (Request)it.next();
sendRequest(req);
}
}
void sendRequest(Request req)
{
// Check for duplicate requests to deal with fibrillating i2p-bt
// (multiple choke/unchokes received cause duplicate requests in the queue)
synchronized(sendQueue)
{
Iterator it = sendQueue.iterator();
while (it.hasNext())
{
Message m = (Message)it.next();
if (m.type == Message.REQUEST && m.piece == req.piece &&
m.begin == req.off && m.length == req.len)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug("Discarding duplicate request " + req + " to peer " + peer);
return;
}
}
}
Message m = new Message();
m.type = Message.REQUEST;
m.piece = req.piece;
m.begin = req.off;
m.length = req.len;
addMessage(m);
req.sendTime = System.currentTimeMillis();
}
// Used by PeerState to limit pipelined requests
int queuedBytes()
{
int total = 0;
synchronized(sendQueue)
{
Iterator it = sendQueue.iterator();
while (it.hasNext())
{
Message m = (Message)it.next();
if (m.type == Message.PIECE)
total += m.length;
}
}
return total;
}
void sendPiece(int piece, int begin, int length, byte[] bytes)
{
Message m = new Message();
m.type = Message.PIECE;
m.piece = piece;
m.begin = begin;
m.length = length;
m.data = bytes;
m.off = 0;
m.len = length;
addMessage(m);
}
void sendCancel(Request req)
{
// See if it is still in our send queue
synchronized(sendQueue)
{
Iterator it = sendQueue.iterator();
while (it.hasNext())
{
Message m = (Message)it.next();
if (m.type == Message.REQUEST
&& m.piece == req.piece
&& m.begin == req.off
&& m.length == req.len)
it.remove();
}
}
// Always send, just to be sure it is really canceled.
Message m = new Message();
m.type = Message.CANCEL;
m.piece = req.piece;
m.begin = req.off;
m.length = req.len;
addMessage(m);
}
// Called by the PeerState when the other side doesn't want this
// request to be handled anymore. Removes any pending Piece Message
// from our send queue.
void cancelRequest(int piece, int begin, int length)
{
synchronized (sendQueue)
{
Iterator it = sendQueue.iterator();
while (it.hasNext())
{
Message m = (Message)it.next();
if (m.type == Message.PIECE
&& m.piece == piece
&& m.begin == begin
&& m.length == length)
it.remove();
}
}
}
}
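
The selection logic in run() above boils down to: drop queued PIECE messages while we are choking the peer, drop queued REQUEST messages while the peer is choking us, and otherwise send the first control message before any queued PIECE. A minimal, self-contained sketch of that rule follows; the names and type constants are illustrative stand-ins, not the source's Message class.

import java.util.Iterator;
import java.util.List;

// Illustrative stand-in for the queue selection in PeerConnectionOut.run().
class SendQueueSketch
{
  static final int REQUEST = 6, PIECE = 7; // illustrative wire type ids

  static Integer pickNext(List<Integer> queue, boolean choking, boolean choked)
  {
    Integer pick = null;
    for (Iterator<Integer> it = queue.iterator(); it.hasNext(); )
      {
        int type = it.next();
        if (type == PIECE)
          {
            if (choking) it.remove(); // the peer gets no pieces while choked
            continue;                 // otherwise leave it for later
          }
        if (type == REQUEST && choked)
          {
            it.remove();              // will be re-requested after an unchoke
            continue;
          }
        it.remove();                  // first eligible control message wins
        pick = type;
        break;
      }
    if (pick == null && !queue.isEmpty())
      pick = queue.remove(0);         // only pieces left; send one
    return pick;
  }
}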


@@ -0,0 +1,869 @@
/* PeerCoordinator - Coordinates which peers do what (up and downloading).
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;
import java.util.Timer;
import net.i2p.util.I2PThread;
import net.i2p.util.Log;
/**
* Coordinates what peer does what.
*/
public class PeerCoordinator implements PeerListener
{
private final Log _log = new Log(PeerCoordinator.class);
final MetaInfo metainfo;
final Storage storage;
final Snark snark;
// package local for access by PeerCheckerTask
final static long CHECK_PERIOD = 40*1000; // 40 seconds
final static int MAX_CONNECTIONS = 16;
final static int MAX_UPLOADERS = 6;
// Approximation of the number of current uploaders.
// Resynced by PeerChecker once in a while.
int uploaders = 0;
int interestedAndChoking = 0;
// final static int MAX_DOWNLOADERS = MAX_CONNECTIONS;
// int downloaders = 0;
private long uploaded;
private long downloaded;
final static int RATE_DEPTH = 6; // make following arrays RATE_DEPTH long
private long uploaded_old[] = {-1,-1,-1,-1,-1,-1};
private long downloaded_old[] = {-1,-1,-1,-1,-1,-1};
// synchronize on this when changing peers or downloaders
final List peers = new ArrayList();
/** estimate of the number of peers, without requiring any synchronization */
volatile int peerCount;
/** Timer to handle all periodical tasks. */
private final Timer timer = new Timer(true);
private final byte[] id;
// Some random wanted pieces
private List wantedPieces;
private boolean halted = false;
private final CoordinatorListener listener;
public String trackerProblems = null;
public int trackerSeenPeers = 0;
public PeerCoordinator(byte[] id, MetaInfo metainfo, Storage storage,
CoordinatorListener listener, Snark torrent)
{
this.id = id;
this.metainfo = metainfo;
this.storage = storage;
this.listener = listener;
this.snark = torrent;
setWantedPieces();
// Install a timer to check the uploaders.
// Randomize the first start time so multiple tasks are spread out,
// this will help the behavior with global limits
Random r = new Random();
timer.schedule(new PeerCheckerTask(this), (CHECK_PERIOD / 2) + r.nextInt((int) CHECK_PERIOD), CHECK_PERIOD);
}
// only called externally from Storage after the double-check fails
public void setWantedPieces()
{
// Make a list of pieces
wantedPieces = new ArrayList();
BitField bitfield = storage.getBitField();
for(int i = 0; i < metainfo.getPieces(); i++)
if (!bitfield.get(i))
wantedPieces.add(new Piece(i));
Collections.shuffle(wantedPieces);
}
public Storage getStorage() { return storage; }
public CoordinatorListener getListener() { return listener; }
// for web page detailed stats
public List peerList()
{
synchronized(peers)
{
return new ArrayList(peers);
}
}
public byte[] getID()
{
return id;
}
public boolean completed()
{
return storage.complete();
}
public int getPeerCount() { return peerCount; }
public int getPeers()
{
synchronized(peers)
{
int rv = peers.size();
peerCount = rv;
return rv;
}
}
/**
* Returns how many bytes are still needed to get the complete file.
*/
public long getLeft()
{
// XXX - Only an approximation.
return ((long) storage.needed()) * metainfo.getPieceLength(0);
}
/**
* Returns the total number of uploaded bytes of all peers.
*/
public long getUploaded()
{
return uploaded;
}
/**
* Returns the total number of downloaded bytes of all peers.
*/
public long getDownloaded()
{
return downloaded;
}
/**
* Push the total uploaded/downloaded onto a RATE_DEPTH deep stack
*/
public void setRateHistory(long up, long down)
{
setRate(up, uploaded_old);
setRate(down, downloaded_old);
}
private static void setRate(long val, long array[])
{
synchronized(array) {
for (int i = RATE_DEPTH-1; i > 0; i--)
array[i] = array[i-1];
array[0] = val;
}
}
/**
* Returns the 4-minute-average rate in Bps
*/
public long getDownloadRate()
{
return getRate(downloaded_old);
}
public long getUploadRate()
{
return getRate(uploaded_old);
}
public long getCurrentUploadRate()
{
// no need to synchronize, only one value
long r = uploaded_old[0];
if (r <= 0)
return 0;
return (r * 1000) / CHECK_PERIOD;
}
private long getRate(long array[])
{
long rate = 0;
int i = 0;
synchronized(array) {
for ( ; i < RATE_DEPTH; i++) {
if (array[i] < 0)
break;
rate += array[i];
}
}
if (i == 0)
return 0;
return rate / (i * CHECK_PERIOD / 1000);
}
public MetaInfo getMetaInfo()
{
return metainfo;
}
public boolean needPeers()
{
synchronized(peers)
{
return !halted && peers.size() < MAX_CONNECTIONS;
}
}
public boolean halted() { return halted; }
public void halt()
{
halted = true;
List removed = new ArrayList();
synchronized(peers)
{
// Stop peer checker task.
timer.cancel();
// Stop peers.
removed.addAll(peers);
peers.clear();
peerCount = 0;
}
while (removed.size() > 0) {
Peer peer = (Peer)removed.remove(0);
peer.disconnect();
removePeerFromPieces(peer);
}
// delete any saved orphan partial piece
savedRequest = null;
}
public void connected(Peer peer)
{
if (halted)
{
peer.disconnect(false);
return;
}
Peer toDisconnect = null;
synchronized(peers)
{
Peer old = peerIDInList(peer.getPeerID(), peers);
if ( (old != null) && (old.getInactiveTime() > 8*60*1000) ) {
// idle for 8 minutes, kill the old con (32KB/8min = 68B/sec minimum for one block)
if (_log.shouldLog(Log.WARN))
_log.warn("Removing old peer: " + peer + ": " + old + ", inactive for " + old.getInactiveTime());
peers.remove(old);
toDisconnect = old;
old = null;
}
if (old != null)
{
if (_log.shouldLog(Log.WARN))
_log.warn("Already connected to: " + peer + ": " + old + ", inactive for " + old.getInactiveTime());
// toDisconnect = peer to get out of synchronized(peers)
peer.disconnect(false); // Don't deregister this connection/peer.
}
// This is already checked in addPeer() but we could have gone over the limit since then
else if (peers.size() >= MAX_CONNECTIONS)
{
if (_log.shouldLog(Log.WARN))
_log.warn("Already at MAX_CONNECTIONS in connected() with peer: " + peer);
// toDisconnect = peer to get out of synchronized(peers)
peer.disconnect(false);
}
else
{
if (_log.shouldLog(Log.INFO))
_log.info("New connection to peer: " + peer + " for " + metainfo.getName());
// Add it to the beginning of the list.
// And try to optimistically make it an uploader.
peers.add(0, peer);
peerCount = peers.size();
unchokePeer();
if (listener != null)
listener.peerChange(this, peer);
}
}
if (toDisconnect != null) {
toDisconnect.disconnect(false);
removePeerFromPieces(toDisconnect);
}
}
// caller must synchronize on peers
private static Peer peerIDInList(PeerID pid, List peers)
{
Iterator it = peers.iterator();
while (it.hasNext()) {
Peer cur = (Peer)it.next();
if (pid.sameID(cur.getPeerID()))
return cur;
}
return null;
}
// returns true if actual attempt to add peer occurs
public boolean addPeer(final Peer peer)
{
if (halted)
{
peer.disconnect(false);
return false;
}
boolean need_more;
int peersize = 0;
synchronized(peers)
{
peersize = peers.size();
// This isn't a strict limit, as we may have several pending connections;
// thus there is an additional check in connected()
need_more = (!peer.isConnected()) && peersize < MAX_CONNECTIONS;
// Check if we already have this peer before we build the connection
Peer old = peerIDInList(peer.getPeerID(), peers);
need_more = need_more && ((old == null) || (old.getInactiveTime() > 8*60*1000));
}
if (need_more)
{
_log.debug("Adding a peer " + peer.getPeerID().getAddress().calculateHash().toBase64() + " for " + metainfo.getName(), new Exception("add/run"));
// Run the peer with us as listener and the current bitfield.
final PeerListener listener = this;
final BitField bitfield = storage.getBitField();
Runnable r = new Runnable()
{
public void run()
{
peer.runConnection(listener, bitfield);
}
};
String threadName = peer.toString();
new I2PThread(r, threadName).start();
return true;
}
if (_log.shouldLog(Log.DEBUG)) {
if (peer.isConnected())
_log.info("Add peer already connected: " + peer);
else
_log.info("Connections: " + peersize + "/" + MAX_CONNECTIONS
+ " not accepting extra peer: " + peer);
}
return false;
}
// (Optimistically) unchoke. Should be called with peers synchronized
void unchokePeer()
{
// The linked list will contain all interested peers that we choke.
// At the start are the peers that have us unchoked; at the end are the
// other peers that are interested but are choking us.
List interested = new LinkedList();
synchronized (peers) {
int count = 0;
int unchokedCount = 0;
int maxUploaders = allowedUploaders();
Iterator it = peers.iterator();
while (it.hasNext())
{
Peer peer = (Peer)it.next();
if (peer.isChoking() && peer.isInterested())
{
count++;
if (uploaders < maxUploaders)
{
if (!peer.isChoked())
interested.add(unchokedCount++, peer);
else
interested.add(peer);
}
}
}
while (uploaders < maxUploaders && interested.size() > 0)
{
Peer peer = (Peer)interested.remove(0);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Unchoke: " + peer);
peer.setChoking(false);
uploaders++;
count--;
// Put peer back at the end of the list.
peers.remove(peer);
peers.add(peer);
peerCount = peers.size();
}
interestedAndChoking = count;
}
}
public byte[] getBitMap()
{
return storage.getBitField().getFieldBytes();
}
/**
* Returns true if we don't have the given piece yet.
*/
public boolean gotHave(Peer peer, int piece)
{
if (listener != null)
listener.peerChange(this, peer);
synchronized(wantedPieces)
{
return wantedPieces.contains(new Piece(piece));
}
}
/**
* Returns true if the given bitfield contains at least one piece we
* are interested in.
*/
public boolean gotBitField(Peer peer, BitField bitfield)
{
if (listener != null)
listener.peerChange(this, peer);
synchronized(wantedPieces)
{
Iterator it = wantedPieces.iterator();
while (it.hasNext())
{
Piece p = (Piece)it.next();
int i = p.getId();
if (bitfield.get(i)) {
p.addPeer(peer);
return true;
}
}
}
return false;
}
/**
* Returns one of the pieces in the given BitField that is still wanted,
* or -1 if none of the given pieces are wanted.
*/
public int wantPiece(Peer peer, BitField havePieces)
{
if (halted) {
if (_log.shouldLog(Log.WARN))
_log.warn("We don't want anything from the peer, as we are halted! peer=" + peer);
return -1;
}
synchronized(wantedPieces)
{
Piece piece = null;
Collections.sort(wantedPieces); // Sort in order of rarest first.
List requested = new ArrayList();
Iterator it = wantedPieces.iterator();
while (piece == null && it.hasNext())
{
Piece p = (Piece)it.next();
if (havePieces.get(p.getId()) && !p.isRequested())
{
piece = p;
}
else if (p.isRequested())
{
requested.add(p);
}
}
//Only request a piece we've requested before if there's no other choice.
if (piece == null) {
// let's not all get on the same piece
Collections.shuffle(requested);
Iterator it2 = requested.iterator();
while (piece == null && it2.hasNext())
{
Piece p = (Piece)it2.next();
if (havePieces.get(p.getId()))
{
piece = p;
}
}
if (piece == null) {
if (_log.shouldLog(Log.WARN))
_log.warn("nothing to even rerequest from " + peer + ": requested = " + requested);
// _log.warn("nothing to even rerequest from " + peer + ": requested = " + requested
// + " wanted = " + wantedPieces + " peerHas = " + havePieces);
return -1; //If we still can't find a piece we want, so be it.
} else {
// Should be a lot smarter here - limit # of parallel attempts and
// share blocks rather than starting from 0 with each peer.
// This is where the flaws of the snark data model are really exposed.
// Could also randomize within the duplicate set rather than strict rarest-first
if (_log.shouldLog(Log.DEBUG))
_log.debug("parallel request (end game?) for " + peer + ": piece = " + piece);
}
}
piece.setRequested(true);
return piece.getId();
}
}
/**
* Returns a byte array containing the requested piece, or null if
* the piece is unknown.
*/
public byte[] gotRequest(Peer peer, int piece, int off, int len)
{
if (halted)
return null;
try
{
return storage.getPiece(piece, off, len);
}
catch (IOException ioe)
{
snark.stopTorrent();
_log.error("Error reading the storage for " + metainfo.getName(), ioe);
throw new RuntimeException("B0rked");
}
}
/**
* Called when a peer has uploaded some bytes of a piece.
*/
public void uploaded(Peer peer, int size)
{
uploaded += size;
if (listener != null)
listener.peerChange(this, peer);
}
/**
* Called when a peer has downloaded some bytes of a piece.
*/
public void downloaded(Peer peer, int size)
{
downloaded += size;
if (listener != null)
listener.peerChange(this, peer);
}
/**
* Returns false if the piece is no good (according to the hash).
* In that case the peer that supplied the piece should probably be
* blacklisted.
*/
public boolean gotPiece(Peer peer, int piece, byte[] bs)
{
if (halted) {
_log.info("Got while-halted piece " + piece + "/" + metainfo.getPieces() +" from " + peer + " for " + metainfo.getName());
return true; // We don't actually care anymore.
}
synchronized(wantedPieces)
{
Piece p = new Piece(piece);
if (!wantedPieces.contains(p))
{
_log.info("Got unwanted piece " + piece + "/" + metainfo.getPieces() +" from " + peer + " for " + metainfo.getName());
// No need to announce have piece to peers.
// Assume we got a good piece, we don't really care anymore.
return true;
}
try
{
if (storage.putPiece(piece, bs))
{
_log.info("Got valid piece " + piece + "/" + metainfo.getPieces() +" from " + peer + " for " + metainfo.getName());
}
else
{
// Oops. We didn't actually download this then... :(
downloaded -= metainfo.getPieceLength(piece);
_log.warn("Got BAD piece " + piece + "/" + metainfo.getPieces() + " from " + peer + " for " + metainfo.getName());
return false; // No need to announce BAD piece to peers.
}
}
catch (IOException ioe)
{
snark.stopTorrent();
_log.error("Error writing storage for " + metainfo.getName(), ioe);
throw new RuntimeException("B0rked");
}
wantedPieces.remove(p);
}
// Announce to the world we have it!
// Disconnect from other seeders when we get the last piece
synchronized(peers)
{
List toDisconnect = new ArrayList();
Iterator it = peers.iterator();
while (it.hasNext())
{
Peer p = (Peer)it.next();
if (p.isConnected())
{
if (completed() && p.isCompleted())
toDisconnect.add(p);
else
p.have(piece);
}
}
it = toDisconnect.iterator();
while (it.hasNext())
{
Peer p = (Peer)it.next();
p.disconnect(true);
}
}
return true;
}
public void gotChoke(Peer peer, boolean choke)
{
if (_log.shouldLog(Log.INFO))
_log.info("Got choke(" + choke + "): " + peer);
if (listener != null)
listener.peerChange(this, peer);
}
public void gotInterest(Peer peer, boolean interest)
{
if (interest)
{
synchronized(peers)
{
if (uploaders < allowedUploaders())
{
if(peer.isChoking())
{
uploaders++;
peer.setChoking(false);
if (_log.shouldLog(Log.INFO))
_log.info("Unchoke: " + peer);
}
}
}
}
if (listener != null)
listener.peerChange(this, peer);
}
public void disconnected(Peer peer)
{
if (_log.shouldLog(Log.INFO))
_log.info("Disconnected " + peer, new Exception("Disconnected by"));
synchronized(peers)
{
// Make sure it is no longer in our lists
if (peers.remove(peer))
{
// Unchoke some random other peer
unchokePeer();
removePeerFromPieces(peer);
}
peerCount = peers.size();
}
if (listener != null)
listener.peerChange(this, peer);
}
/** Called when a peer is removed, to prevent it from being used in
* rarest-first calculations.
*/
public void removePeerFromPieces(Peer peer) {
synchronized(wantedPieces) {
for(Iterator iter = wantedPieces.iterator(); iter.hasNext(); ) {
Piece piece = (Piece)iter.next();
piece.removePeer(peer);
}
}
}
/** Simple method to save a partial piece on peer disconnection
* and hopefully restart it later.
* Only one partial piece is saved at a time.
* Replace it if a new one is bigger or the old one is too old.
* Storage method is private so we can expand to save multiple partials
* if we wish.
*/
private Request savedRequest = null;
private long savedRequestTime = 0;
public void savePeerPartial(PeerState state)
{
if (halted)
return;
Request req = state.getPartialRequest();
if (req == null)
return;
if (savedRequest == null ||
req.off > savedRequest.off ||
System.currentTimeMillis() > savedRequestTime + (15 * 60 * 1000)) {
if (savedRequest == null || (req.piece != savedRequest.piece && req.off != savedRequest.off)) {
if (_log.shouldLog(Log.DEBUG)) {
_log.debug(" Saving orphaned partial piece " + req);
if (savedRequest != null)
_log.debug(" (Discarding previously saved orphan) " + savedRequest);
}
}
savedRequest = req;
savedRequestTime = System.currentTimeMillis();
} else {
if (req.piece != savedRequest.piece)
if (_log.shouldLog(Log.DEBUG))
_log.debug(" Discarding orphaned partial piece " + req);
}
}
/** Return partial piece if it's still wanted and peer has it.
*/
public Request getPeerPartial(BitField havePieces) {
if (savedRequest == null)
return null;
if (! havePieces.get(savedRequest.piece)) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Peer doesn't have orphaned piece " + savedRequest);
return null;
}
synchronized(wantedPieces)
{
for(Iterator iter = wantedPieces.iterator(); iter.hasNext(); ) {
Piece piece = (Piece)iter.next();
if (piece.getId() == savedRequest.piece) {
Request req = savedRequest;
piece.setRequested(true);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Restoring orphaned partial piece " + req);
savedRequest = null;
return req;
}
}
}
if (_log.shouldLog(Log.DEBUG))
_log.debug("We no longer want orphaned piece " + savedRequest);
savedRequest = null;
return null;
}
/** Clear the requested flag for a piece if the peer
** is the only one requesting it
*/
private void markUnrequestedIfOnlyOne(Peer peer, int piece)
{
// see if anybody else is requesting
synchronized (peers)
{
Iterator it = peers.iterator();
while (it.hasNext()) {
Peer p = (Peer)it.next();
if (p.equals(peer))
continue;
if (p.state == null)
continue;
int[] arr = p.state.getRequestedPieces();
for (int i = 0; arr[i] >= 0; i++)
if(arr[i] == piece) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Another peer is requesting piece " + piece);
return;
}
}
}
// nobody is, so mark unrequested
synchronized(wantedPieces)
{
Iterator it = wantedPieces.iterator();
while (it.hasNext()) {
Piece p = (Piece)it.next();
if (p.getId() == piece) {
p.setRequested(false);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Removing from request list piece " + piece);
return;
}
}
}
}
/** Mark a peer's requested pieces unrequested when it is disconnected
** Once for each piece
** This is enough trouble, maybe would be easier just to regenerate
** the requested list from scratch instead.
*/
public void markUnrequested(Peer peer)
{
if (halted || peer.state == null)
return;
int[] arr = peer.state.getRequestedPieces();
for (int i = 0; arr[i] >= 0; i++)
markUnrequestedIfOnlyOne(peer, arr[i]);
}
/** Return number of allowed uploaders for this torrent.
** Check with Snark to see if we are over the total upload limit.
*/
public int allowedUploaders()
{
if (Snark.overUploadLimit(uploaders)) {
// if (_log.shouldLog(Log.DEBUG))
// _log.debug("Over limit, uploaders was: " + uploaders);
return uploaders - 1;
} else if (uploaders < MAX_UPLOADERS)
return uploaders + 1;
else
return MAX_UPLOADERS;
}
public boolean overUpBWLimit()
{
return Snark.overUpBWLimit();
}
public boolean overUpBWLimit(long total)
{
return Snark.overUpBWLimit(total * 1000 / CHECK_PERIOD);
}
}
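
The "4-minute-average" in getRate()'s javadoc comes from RATE_DEPTH = 6 samples of CHECK_PERIOD = 40 seconds each (240 seconds); the method sums the filled slots and divides by the number of filled slots times 40 to get bytes per second. A standalone recomputation for illustration, with made-up sample values:

// Illustrative recomputation of PeerCoordinator.getRate(): each slot holds
// the bytes moved in one 40-second CHECK_PERIOD, up to RATE_DEPTH = 6 slots
// (a 4-minute window); unfilled slots are -1 and end the sum early.
class RateWindowSketch
{
  static final long CHECK_PERIOD = 40 * 1000;
  static final int RATE_DEPTH = 6;

  static long rate(long[] history)
  {
    long sum = 0;
    int i = 0;
    for ( ; i < RATE_DEPTH; i++)
      {
        if (history[i] < 0)
          break;
        sum += history[i];
      }
    return (i == 0) ? 0 : sum / (i * CHECK_PERIOD / 1000);
  }

  public static void main(String[] args)
  {
    long[] history = { 40960, 40960, 40960, -1, -1, -1 }; // three 40-second samples
    System.out.println(rate(history) + " Bps");           // 122880 / 120 = 1024
  }
}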


@@ -0,0 +1,39 @@
package org.klomp.snark;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;
/**
* The set of PeerCoordinators. Used by the multitorrent functionality
* in the PeerAcceptor to pick the right PeerCoordinator to accept the
* connection for. Each PeerCoordinator is added to the set from within
* the Snark (and removed from it there too)
*/
public class PeerCoordinatorSet {
private static final PeerCoordinatorSet _instance = new PeerCoordinatorSet();
public static final PeerCoordinatorSet instance() { return _instance; }
private Set _coordinators;
private PeerCoordinatorSet() {
_coordinators = new HashSet();
}
public Iterator iterator() {
synchronized (_coordinators) {
return new ArrayList(_coordinators).iterator();
}
}
public void add(PeerCoordinator coordinator) {
synchronized (_coordinators) {
_coordinators.add(coordinator);
}
}
public void remove(PeerCoordinator coordinator) {
synchronized (_coordinators) {
_coordinators.remove(coordinator);
}
}
}


@@ -0,0 +1,210 @@
/* PeerID - All public information concerning a peer.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.IOException;
import java.net.UnknownHostException;
import java.util.Map;
import net.i2p.data.Base64;
import net.i2p.data.Destination;
import org.klomp.snark.bencode.BDecoder;
import org.klomp.snark.bencode.BEValue;
import org.klomp.snark.bencode.InvalidBEncodingException;
public class PeerID implements Comparable
{
private final byte[] id;
private final Destination address;
private final int port;
private final int hash;
public PeerID(byte[] id, Destination address)
{
this.id = id;
this.address = address;
this.port = 6881;
hash = calculateHash();
}
/**
* Creates a PeerID from a BDecoder.
*/
public PeerID(BDecoder be)
throws IOException
{
this(be.bdecodeMap().getMap());
}
/**
* Creates a PeerID from a Map containing BEncoded peer id, ip and
* port.
*/
public PeerID(Map m)
throws InvalidBEncodingException, UnknownHostException
{
BEValue bevalue = (BEValue)m.get("peer id");
if (bevalue == null)
throw new InvalidBEncodingException("peer id missing");
id = bevalue.getBytes();
bevalue = (BEValue)m.get("ip");
if (bevalue == null)
throw new InvalidBEncodingException("ip missing");
address = I2PSnarkUtil.instance().getDestination(bevalue.getString());
if (address == null)
throw new InvalidBEncodingException("Invalid destination [" + bevalue.getString() + "]");
port = 6881;
hash = calculateHash();
}
public byte[] getID()
{
return id;
}
public Destination getAddress()
{
return address;
}
public int getPort()
{
return port;
}
private int calculateHash()
{
int b = 0;
for (int i = 0; i < id.length; i++)
b ^= id[i];
return (b ^ address.hashCode()) ^ port;
}
/**
* The hash code of a PeerID is the exclusive or of all id bytes,
* xored with the address hash code and the port.
*/
public int hashCode()
{
return hash;
}
/**
* Returns true if and only if this peerID and the given peerID have
* the same 20 bytes as ID.
*/
public boolean sameID(PeerID pid)
{
boolean equal = true;
for (int i = 0; equal && i < id.length; i++)
equal = id[i] == pid.id[i];
return equal;
}
/**
* Two PeerIDs are equal when they have the same id, address and port.
*/
public boolean equals(Object o)
{
if (o instanceof PeerID)
{
PeerID pid = (PeerID)o;
return port == pid.port
&& address.equals(pid.address)
&& sameID(pid);
}
else
return false;
}
/**
* Compares port, address and id.
*/
public int compareTo(Object o)
{
PeerID pid = (PeerID)o;
int result = port - pid.port;
if (result != 0)
return result;
result = address.hashCode() - pid.address.hashCode();
if (result != 0)
return result;
for (int i = 0; i < id.length; i++)
{
result = id[i] - pid.id[i];
if (result != 0)
return result;
}
return 0;
}
/**
* Returns the String "id@address" where id is a short prefix of the Base64-encoded
* id (leading zero bytes skipped) and address is a short prefix of the Base64 dest
* (was the Base64 hash of the dest), which should match what the bytemonsoon
* tracker reports on its web pages.
*/
public String toString()
{
int nonZero = 0;
for (int i = 0; i < id.length; i++) {
if (id[i] != 0) {
nonZero = i;
break;
}
}
return Base64.encode(id, nonZero, id.length-nonZero).substring(0,4) + "@" + address.toBase64().substring(0,6);
}
/**
* Encode an id as a hex string, dropping leading zero bytes.
*/
public static String idencode(byte[] bs)
{
boolean leading_zeros = true;
StringBuffer sb = new StringBuffer(bs.length*2);
for (int i = 0; i < bs.length; i++)
{
int c = bs[i] & 0xFF;
if (leading_zeros && c == 0)
continue;
else
leading_zeros = false;
if (c < 16)
sb.append('0');
sb.append(Integer.toHexString(c));
}
return sb.toString();
}
}
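
PeerID.idencode() hex-encodes an id while skipping leading zero bytes, which keeps the ids Snark generates (nine zero bytes, three snark bytes, then random bytes) short when printed. A purely illustrative check with a made-up id:

package org.klomp.snark;

/** Small check of PeerID.idencode(): leading zero bytes are dropped
 *  and each remaining byte becomes two hex digits. Illustrative only. */
public class PeerIDEncodeSketch {
    public static void main(String[] args) {
        byte[] id = new byte[] { 0, 0, 0, 3, 3, 3, (byte)0xAB, 0x10 };
        // prints "030303ab10" - the three leading zero bytes are skipped
        System.out.println(PeerID.idencode(id));
    }
}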

View File

@ -0,0 +1,172 @@
/* PeerListener - Interface for listening to peer events.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
/**
* Listener for Peer events.
*/
public interface PeerListener
{
/**
* Called when the connection to the peer has started and the
* handshake was successful.
*
* @param peer the Peer that just got connected.
*/
void connected(Peer peer);
/**
* Called when the connection to the peer was terminated or the
* connection handshake failed.
*
* @param peer the Peer that just got disconnected.
*/
void disconnected(Peer peer);
/**
* Called when a choke message is received.
*
* @param peer the Peer that got the message.
* @param choke true when the peer got a choke message, false when
* the peer got an unchoke message.
*/
void gotChoke(Peer peer, boolean choke);
/**
* Called when an interested message is received.
*
* @param peer the Peer that got the message.
* @param interest true when the peer got an interested message, false when
* the peer got an uninterested message.
*/
void gotInterest(Peer peer, boolean interest);
/**
* Called when a have piece message is received. If the method
* returns true and the peer has not yet received an interested
* message, or we indicated earlier that we were not interested, then an
* interested message will be sent.
*
* @param peer the Peer that got the message.
* @param piece the piece number that the peer just got.
*
* @return true when it is a piece that we want, false if the piece is
* already known.
*/
boolean gotHave(Peer peer, int piece);
/**
* Called when a bitmap message is received. If this method returns
* true, an interested message will be sent back to the peer.
*
* @param peer the Peer that got the message.
* @param bitfield a BitField containing the pieces that the other
* side has.
*
* @return true when the BitField contains pieces we want, false if
* the pieces are already known.
*/
boolean gotBitField(Peer peer, BitField bitfield);
/**
* Called when a piece is received from the peer. The piece must be
* requested by Peer.request() first. If this method returns false
* that means the Peer provided a corrupted piece and the connection
* will be closed.
*
* @param peer the Peer that got the piece.
* @param piece the piece number received.
* @param bs the byte array containing the piece.
*
* @return true when the bytes represent the piece, false otherwise.
*/
boolean gotPiece(Peer peer, int piece, byte[] bs);
/**
* Called when the peer wants (part of) a piece from us. Only called
* when the peer is not choked by us (<code>peer.choke(false)</code>
* was called).
*
* @param peer the Peer that wants the piece.
* @param piece the piece number requested.
* @param off byte offset into the piece.
* @param len length of the chunk requested.
*
* @return a byte array containing the piece or null when the piece
* is not available (which is a protocol error).
*/
byte[] gotRequest(Peer peer, int piece, int off, int len);
/**
* Called when a (partial) piece has been downloaded from the peer.
*
* @param peer the Peer from which size bytes were downloaded.
* @param size the number of bytes that were downloaded.
*/
void downloaded(Peer peer, int size);
/**
* Called when a (partial) piece has been uploaded to the peer.
*
* @param peer the Peer to which size bytes were uploaded.
* @param size the number of bytes that were uploaded.
*/
void uploaded(Peer peer, int size);
/**
* Called when we are downloading from the peer and need to ask for
* a new piece. Might be called multiple times before
* <code>gotPiece()</code> is called.
*
* @param peer the Peer that will be asked to provide the piece.
* @param bitfield a BitField containing the pieces that the other
* side has.
*
* @return one of the pieces from the bitfield that we want or -1 if
* we are no longer interested in the peer.
*/
int wantPiece(Peer peer, BitField bitfield);
/**
* Called when the peer has disconnected and the peer task may have a partially
* downloaded piece that the PeerCoordinator can save.
*
* @param state the PeerState for the peer
*/
void savePeerPartial(PeerState state);
/**
* Called when a peer has connected and there may be a partially
* downloaded piece that the coordinator can give the peer task.
*
* @param havePieces the have-pieces bitmask for the peer
*
* @return request (contains the partial data and valid length)
*/
Request getPeerPartial(BitField havePieces);
/** Mark a peer's requested pieces unrequested when it is disconnected
* This prevents premature end game
*
* @param peer the peer that is disconnecting
*/
void markUnrequested(Peer peer);
}
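
PeerListener is the callback surface between a peer connection and its coordinator; in this codebase PeerCoordinator appears to be the real implementation. The stub below is not part of the original sources and exists only to make the method signatures concrete: it logs connects and disconnects and declines everything else.

package org.klomp.snark;

/** Do-nothing PeerListener sketch; illustrative only. */
public class LoggingPeerListener implements PeerListener {
    public void connected(Peer peer)    { System.out.println("connected " + peer); }
    public void disconnected(Peer peer) { System.out.println("disconnected " + peer); }
    public void gotChoke(Peer peer, boolean choke) { }
    public void gotInterest(Peer peer, boolean interest) { }
    public boolean gotHave(Peer peer, int piece) { return false; }            // never interested
    public boolean gotBitField(Peer peer, BitField bitfield) { return false; }
    public boolean gotPiece(Peer peer, int piece, byte[] bs) { return true; } // a real listener verifies the data
    public byte[] gotRequest(Peer peer, int piece, int off, int len) { return null; } // nothing to serve
    public void downloaded(Peer peer, int size) { }
    public void uploaded(Peer peer, int size) { }
    public int wantPiece(Peer peer, BitField bitfield) { return -1; }         // want nothing
    public void savePeerPartial(PeerState state) { }
    public Request getPeerPartial(BitField havePieces) { return null; }
    public void markUnrequested(Peer peer) { }
}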

View File

@ -0,0 +1,129 @@
/* PeerMonitorTask - TimerTask that monitors the peers and total up/down speed
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.util.Iterator;
import java.util.TimerTask;
/**
* TimerTask that monitors the peers and total up/download speeds.
* Works together with the main Snark class to report periodic statistics.
*/
class PeerMonitorTask extends TimerTask
{
final static long MONITOR_PERIOD = 10 * 1000; // Ten seconds.
private final long KILOPERSECOND = 1024 * (MONITOR_PERIOD / 1000);
private final PeerCoordinator coordinator;
private long lastDownloaded = 0;
private long lastUploaded = 0;
PeerMonitorTask(PeerCoordinator coordinator)
{
this.coordinator = coordinator;
}
public void run()
{
// Get some statistics
int peers = 0;
int uploaders = 0;
int downloaders = 0;
int interested = 0;
int interesting = 0;
int choking = 0;
int choked = 0;
synchronized(coordinator.peers)
{
Iterator it = coordinator.peers.iterator();
while (it.hasNext())
{
Peer peer = (Peer)it.next();
// Don't list dying peers
if (!peer.isConnected())
continue;
peers++;
if (!peer.isChoking())
uploaders++;
if (!peer.isChoked() && peer.isInteresting())
downloaders++;
if (peer.isInterested())
interested++;
if (peer.isInteresting())
interesting++;
if (peer.isChoking())
choking++;
if (peer.isChoked())
choked++;
}
}
// Print some statistics
long downloaded = coordinator.getDownloaded();
String totalDown;
if (downloaded >= 10 * 1024 * 1024)
totalDown = (downloaded / (1024 * 1024)) + "MB";
else
totalDown = (downloaded / 1024 )+ "KB";
long uploaded = coordinator.getUploaded();
String totalUp;
if (uploaded >= 10 * 1024 * 1024)
totalUp = (uploaded / (1024 * 1024)) + "MB";
else
totalUp = (uploaded / 1024) + "KB";
int needP = coordinator.storage.needed();
long needMB
= needP * coordinator.metainfo.getPieceLength(0) / (1024 * 1024);
int totalP = coordinator.metainfo.getPieces();
long totalMB = coordinator.metainfo.getTotalLength() / (1024 * 1024);
System.out.println();
System.out.println("Down: "
+ (downloaded - lastDownloaded) / KILOPERSECOND
+ "KB/s"
+ " (" + totalDown + ")"
+ " Up: "
+ (uploaded - lastUploaded) / KILOPERSECOND
+ "KB/s"
+ " (" + totalUp + ")"
+ " Need " + needP
+ " (" + needMB + "MB)"
+ " of " + totalP
+ " (" + totalMB + "MB)"
+ " pieces");
System.out.println(peers + ": Download #" + downloaders
+ " Upload #" + uploaders
+ " Interested #" + interested
+ " Interesting #" + interesting
+ " Choking #" + choking
+ " Choked #" + choked);
System.out.println();
lastDownloaded = downloaded;
lastUploaded = uploaded;
}
}
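
The statistics printed by run() come down to two conversions: a byte delta divided by KILOPERSECOND (1024 times the period in seconds) gives KB/s, and totals switch from KB to MB once they pass 10 MB. A standalone sketch of just that arithmetic, with made-up byte counts:

/** Standalone sketch of the rate/size arithmetic in PeerMonitorTask.run(). */
public class RateFormatSketch {
    static final long MONITOR_PERIOD = 10 * 1000;                   // ten seconds
    static final long KILOPERSECOND = 1024 * (MONITOR_PERIOD / 1000);

    static String total(long bytes) {
        // same 10 MB threshold as the task
        if (bytes >= 10 * 1024 * 1024)
            return (bytes / (1024 * 1024)) + "MB";
        return (bytes / 1024) + "KB";
    }

    public static void main(String[] args) {
        long last = 3 * 1024 * 1024;    // bytes at the previous tick (made up)
        long now = last + 512 * 1024;   // 512 KB more during this tick (made up)
        System.out.println("Down: " + (now - last) / KILOPERSECOND + "KB/s"
                           + " (" + total(now) + ")");  // Down: 51KB/s (3584KB)
    }
}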

View File

@ -0,0 +1,608 @@
/* PeerState - Keeps track of the Peer state through connection callbacks.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import net.i2p.util.Log;
class PeerState
{
private Log _log = new Log(PeerState.class);
final Peer peer;
final PeerListener listener;
final MetaInfo metainfo;
// Interesting and choking describes whether we are interested in or
// are choking the other side.
boolean interesting = false;
boolean choking = true;
// Interested and choked describes whether the other side is
// interested in us or choked us.
boolean interested = false;
boolean choked = true;
// Package local for use by Peer.
long downloaded;
long uploaded;
BitField bitfield;
// Package local for use by Peer.
final PeerConnectionIn in;
final PeerConnectionOut out;
// Outstanding request
private final List outstandingRequests = new ArrayList();
private Request lastRequest = null;
// If we have to resend outstanding requests (true after we got choked).
private boolean resend = false;
private final static int MAX_PIPELINE = 2; // this is for outbound requests
private final static int MAX_PIPELINE_BYTES = 128*1024; // this is for inbound requests
public final static int PARTSIZE = 32*1024; // Snark was 16K, i2p-bt uses 64KB
private final static int MAX_PARTSIZE = 64*1024; // Don't let anybody request more than this
PeerState(Peer peer, PeerListener listener, MetaInfo metainfo,
PeerConnectionIn in, PeerConnectionOut out)
{
this.peer = peer;
this.listener = listener;
this.metainfo = metainfo;
this.in = in;
this.out = out;
}
// NOTE Methods that inspect or change the state synchronize (on this).
void keepAliveMessage()
{
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " rcv alive");
/* XXX - ignored */
}
void chokeMessage(boolean choke)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " rcv " + (choke ? "" : "un") + "choked");
choked = choke;
if (choked)
resend = true;
listener.gotChoke(peer, choke);
if (!choked && interesting)
request();
}
void interestedMessage(boolean interest)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " rcv " + (interest ? "" : "un")
+ "interested");
interested = interest;
listener.gotInterest(peer, interest);
}
void haveMessage(int piece)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " rcv have(" + piece + ")");
// Sanity check
if (piece < 0 || piece >= metainfo.getPieces())
{
// XXX disconnect?
if (_log.shouldLog(Log.WARN))
_log.warn("Got strange 'have: " + piece + "' message from " + peer);
return;
}
synchronized(this)
{
// Can happen if the other side never sent a bitfield message.
if (bitfield == null)
bitfield = new BitField(metainfo.getPieces());
bitfield.set(piece);
}
if (listener.gotHave(peer, piece))
setInteresting(true);
}
void bitfieldMessage(byte[] bitmap)
{
synchronized(this)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " rcv bitfield");
if (bitfield != null)
{
// XXX - Be liberal in what you accept?
if (_log.shouldLog(Log.WARN))
_log.warn("Got unexpected bitfield message from " + peer);
return;
}
// XXX - Check for weird bitfield and disconnect?
bitfield = new BitField(bitmap, metainfo.getPieces());
}
setInteresting(listener.gotBitField(peer, bitfield));
}
void requestMessage(int piece, int begin, int length)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " rcv request("
+ piece + ", " + begin + ", " + length + ") ");
if (choking)
{
if (_log.shouldLog(Log.INFO))
_log.info("Request received, but choking " + peer);
return;
}
// Sanity check
if (piece < 0
|| piece >= metainfo.getPieces()
|| begin < 0
|| begin > metainfo.getPieceLength(piece)
|| length <= 0
|| length > MAX_PARTSIZE)
{
// XXX - Protocol error -> disconnect?
if (_log.shouldLog(Log.WARN))
_log.warn("Got strange 'request: " + piece
+ ", " + begin
+ ", " + length
+ "' message from " + peer);
return;
}
// Limit total pipelined requests to MAX_PIPELINE bytes
// to conserve memory and prevent DOS
if (out.queuedBytes() + length > MAX_PIPELINE_BYTES)
{
if (_log.shouldLog(Log.WARN))
_log.warn("Discarding request over pipeline limit from " + peer);
return;
}
byte[] pieceBytes = listener.gotRequest(peer, piece, begin, length);
if (pieceBytes == null)
{
// XXX - Protocol error -> disconnect?
if (_log.shouldLog(Log.WARN))
_log.warn("Got request for unknown piece: " + piece);
return;
}
// More sanity checks
if (length != pieceBytes.length)
{
// XXX - Protocol error -> disconnect?
if (_log.shouldLog(Log.WARN))
_log.warn("Got out of range 'request: " + piece
+ ", " + begin
+ ", " + length
+ "' message from " + peer);
return;
}
if (_log.shouldLog(Log.INFO))
_log.info("Sending (" + piece + ", " + begin + ", "
+ length + ")" + " to " + peer);
out.sendPiece(piece, begin, length, pieceBytes);
}
/**
* Called when some bytes have left the outgoing connection.
* XXX - Should indicate whether it was a real piece or overhead.
*/
void uploaded(int size)
{
uploaded += size;
listener.uploaded(peer, size);
}
// This is used to flag that we have to back up from the firstOutstandingRequest
// when calculating how far we've gotten
Request pendingRequest = null;
/**
* Called when a partial piece request has been handled by
* PeerConnectionIn.
*/
void pieceMessage(Request req)
{
int size = req.len;
downloaded += size;
listener.downloaded(peer, size);
pendingRequest = null;
// Last chunk needed for this piece?
if (getFirstOutstandingRequest(req.piece) == -1)
{
if (listener.gotPiece(peer, req.piece, req.bs))
{
if (_log.shouldLog(Log.DEBUG))
_log.debug("Got " + req.piece + ": " + peer);
}
else
{
if (_log.shouldLog(Log.WARN))
_log.warn("Got BAD " + req.piece + " from " + peer);
// XXX ARGH What now !?!
downloaded = 0;
}
}
}
synchronized private int getFirstOutstandingRequest(int piece)
{
for (int i = 0; i < outstandingRequests.size(); i++)
if (((Request)outstandingRequests.get(i)).piece == piece)
return i;
return -1;
}
/**
* Called when a piece message is being processed by the incoming
* connection. Returns null when there was no such request. It also
* requeues/sends requests when it thinks that they must have been
* lost.
*/
Request getOutstandingRequest(int piece, int begin, int length)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug("getChunk("
+ piece + "," + begin + "," + length + ") "
+ peer);
int r = getFirstOutstandingRequest(piece);
// Unrequested piece number?
if (r == -1)
{
if (_log.shouldLog(Log.INFO))
_log.info("Unrequested 'piece: " + piece + ", "
+ begin + ", " + length + "' received from "
+ peer);
downloaded = 0; // XXX - punishment?
return null;
}
// Lookup the correct piece chunk request from the list.
Request req;
synchronized(this)
{
req = (Request)outstandingRequests.get(r);
while (req.piece == piece && req.off != begin
&& r < outstandingRequests.size() - 1)
{
r++;
req = (Request)outstandingRequests.get(r);
}
// Something wrong?
if (req.piece != piece || req.off != begin || req.len != length)
{
if (_log.shouldLog(Log.INFO))
_log.info("Unrequested or unneeded 'piece: "
+ piece + ", "
+ begin + ", "
+ length + "' received from "
+ peer);
downloaded = 0; // XXX - punishment?
return null;
}
// Report missing requests.
if (r != 0)
{
if (_log.shouldLog(Log.WARN))
_log.warn("Some requests dropped, got " + req
+ ", wanted for peer: " + peer);
for (int i = 0; i < r; i++)
{
Request dropReq = (Request)outstandingRequests.remove(0);
outstandingRequests.add(dropReq);
if (!choked)
out.sendRequest(dropReq);
if (_log.shouldLog(Log.WARN))
_log.warn("dropped " + dropReq + " with peer " + peer);
}
}
outstandingRequests.remove(0);
}
// Request more if necessary to keep the pipeline filled.
addRequest();
pendingRequest = req;
return req;
}
// get longest partial piece
Request getPartialRequest()
{
Request req = null;
for (int i = 0; i < outstandingRequests.size(); i++) {
Request r1 = (Request)outstandingRequests.get(i);
int j = getFirstOutstandingRequest(r1.piece);
if (j == -1)
continue;
Request r2 = (Request)outstandingRequests.get(j);
if (r2.off > 0 && ((req == null) || (r2.off > req.off)))
req = r2;
}
if (pendingRequest != null && req != null && pendingRequest.off < req.off) {
if (pendingRequest.off != 0)
req = pendingRequest;
else
req = null;
}
return req;
}
// return array of pieces terminated by -1
// remove most duplicates
// but still could be some duplicates, not guaranteed
int[] getRequestedPieces()
{
int size = outstandingRequests.size();
int[] arr = new int[size+2];
int pc = -1;
int pos = 0;
if (pendingRequest != null) {
pc = pendingRequest.piece;
arr[pos++] = pc;
}
Request req = null;
for (int i = 0; i < size; i++) {
Request r1 = (Request)outstandingRequests.get(i);
if (pc != r1.piece) {
pc = r1.piece;
arr[pos++] = pc;
}
}
arr[pos] = -1;
return(arr);
}
void cancelMessage(int piece, int begin, int length)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug("Got cancel message ("
+ piece + ", " + begin + ", " + length + ")");
out.cancelRequest(piece, begin, length);
}
void unknownMessage(int type, byte[] bs)
{
if (_log.shouldLog(Log.WARN))
_log.warn("Warning: Ignoring unknown message type: " + type
+ " length: " + bs.length);
}
void havePiece(int piece)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug("Tell " + peer + " havePiece(" + piece + ")");
synchronized(this)
{
// Tell the other side that we are no longer interested in any of
// the outstanding requests for this piece.
if (lastRequest != null && lastRequest.piece == piece)
lastRequest = null;
Iterator it = outstandingRequests.iterator();
while (it.hasNext())
{
Request req = (Request)it.next();
if (req.piece == piece)
{
it.remove();
// Send cancel even when we are choked to make sure that it is
// really never ever sent.
out.sendCancel(req);
}
}
}
// Tell the other side that we really have this piece.
out.sendHave(piece);
// Request something else if necessary.
addRequest();
synchronized(this)
{
// Is the peer still interesting?
if (lastRequest == null)
setInteresting(false);
}
}
// Starts or resumes requesting pieces.
private void request()
{
// Are there outstanding requests that have to be resent?
if (resend)
{
synchronized (this) {
out.sendRequests(outstandingRequests);
}
resend = false;
}
// Add/Send some more requests if necessary.
addRequest();
}
/**
* Adds a new request to the outstanding requests list.
*/
synchronized private void addRequest()
{
boolean more_pieces = true;
while (more_pieces)
{
more_pieces = outstandingRequests.size() < MAX_PIPELINE;
// We want something and we don't have outstanding requests?
if (more_pieces && lastRequest == null)
more_pieces = requestNextPiece();
else if (more_pieces) // We want something
{
int pieceLength;
boolean isLastChunk;
pieceLength = metainfo.getPieceLength(lastRequest.piece);
isLastChunk = lastRequest.off + lastRequest.len == pieceLength;
// Last part of a piece?
if (isLastChunk)
more_pieces = requestNextPiece();
else
{
int nextPiece = lastRequest.piece;
int nextBegin = lastRequest.off + PARTSIZE;
byte[] bs = lastRequest.bs;
int maxLength = pieceLength - nextBegin;
int nextLength = maxLength > PARTSIZE ? PARTSIZE
: maxLength;
Request req
= new Request(nextPiece, bs, nextBegin, nextLength);
outstandingRequests.add(req);
if (!choked)
out.sendRequest(req);
lastRequest = req;
}
}
}
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " requests " + outstandingRequests);
}
// Starts requesting first chunk of next piece. Returns true if
// something has been added to the requests, false otherwise.
private boolean requestNextPiece()
{
// Check that we already know what the other side has.
if (bitfield != null)
{
// Check for adopting an orphaned partial piece
Request r = listener.getPeerPartial(bitfield);
if (r != null) {
// Check that r not already in outstandingRequests
int[] arr = getRequestedPieces();
boolean found = false;
for (int i = 0; arr[i] >= 0; i++) {
if (arr[i] == r.piece) {
found = true;
break;
}
}
if (!found) {
outstandingRequests.add(r);
if (!choked)
out.sendRequest(r);
lastRequest = r;
return true;
}
}
int nextPiece = listener.wantPiece(peer, bitfield);
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " want piece " + nextPiece);
if (nextPiece != -1
&& (lastRequest == null || lastRequest.piece != nextPiece))
{
int piece_length = metainfo.getPieceLength(nextPiece);
//Catch a common place for OOMs esp. on 1MB pieces
byte[] bs;
try {
bs = new byte[piece_length];
} catch (OutOfMemoryError oom) {
_log.warn("Out of memory, can't request piece " + nextPiece, oom);
return false;
}
int length = Math.min(piece_length, PARTSIZE);
Request req = new Request(nextPiece, bs, 0, length);
outstandingRequests.add(req);
if (!choked)
out.sendRequest(req);
lastRequest = req;
return true;
}
}
return false;
}
synchronized void setInteresting(boolean interest)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " setInteresting(" + interest + ")");
if (interest != interesting)
{
interesting = interest;
out.sendInterest(interest);
if (interesting && !choked)
request();
}
}
synchronized void setChoking(boolean choke)
{
if (_log.shouldLog(Log.DEBUG))
_log.debug(peer + " setChoking(" + choke + ")");
if (choking != choke)
{
choking = choke;
out.sendChoke(choke);
}
}
void keepAlive()
{
out.sendAlive();
}
synchronized void retransmitRequests()
{
if (interesting && !choked)
out.retransmitRequests(outstandingRequests);
}
}
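
addRequest() above walks each piece in PARTSIZE (32 KB) chunks, requesting the next chunk of the current piece until its last chunk has been issued, and keeps at most MAX_PIPELINE requests outstanding. The standalone sketch below shows only the offset/length stepping for an assumed piece length; it is not the real pipelining code:

/** Standalone sketch of how a piece is split into PARTSIZE chunks,
 *  mirroring the offset/length stepping in PeerState.addRequest(). */
public class ChunkingSketch {
    static final int PARTSIZE = 32 * 1024;

    public static void main(String[] args) {
        int pieceLength = 96 * 1024 + 100;   // made-up piece length
        for (int off = 0; off < pieceLength; off += PARTSIZE) {
            int len = Math.min(PARTSIZE, pieceLength - off);
            // prints (0,32768) (32768,32768) (65536,32768) (98304,100)
            System.out.println("request off=" + off + " len=" + len);
        }
    }
}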

View File

@ -0,0 +1,42 @@
package org.klomp.snark;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
public class Piece implements Comparable {
private int id;
private Set peers;
private boolean requested;
public Piece(int id) {
this.id = id;
this.peers = Collections.synchronizedSet(new HashSet());
this.requested = false;
}
public int compareTo(Object o) throws ClassCastException {
return this.peers.size() - ((Piece)o).peers.size();
}
public boolean equals(Object o) {
if (o == null) return false;
try {
return this.id == ((Piece)o).id;
} catch (ClassCastException cce) {
return false;
}
}
public int getId() { return this.id; }
public Set getPeers() { return this.peers; }
public boolean addPeer(Peer peer) { return this.peers.add(peer.getPeerID()); }
public boolean removePeer(Peer peer) { return this.peers.remove(peer.getPeerID()); }
public boolean isRequested() { return this.requested; }
public void setRequested(boolean requested) { this.requested = requested; }
public String toString() {
return String.valueOf(id);
}
}
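
Piece.compareTo() orders pieces by how many known peers advertise them, which is what lets a coordinator prefer rarer pieces when choosing what to request. A purely illustrative sort; because getPeers() is a raw Set and compareTo() only looks at its size, plain strings stand in for peer IDs below:

package org.klomp.snark;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Illustrates the availability ordering given by Piece.compareTo(). */
public class RarestFirstSketch {
    public static void main(String[] args) {
        Piece a = new Piece(0);
        Piece b = new Piece(1);
        Piece c = new Piece(2);
        // stand-in "peer IDs"; only the set sizes matter to compareTo()
        b.getPeers().add("p1"); b.getPeers().add("p2");
        c.getPeers().add("p1");
        List pieces = new ArrayList();
        pieces.add(a); pieces.add(b); pieces.add(c);
        Collections.sort(pieces);     // ascending availability
        System.out.println(pieces);   // prints [0, 2, 1]
    }
}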

View File

@ -0,0 +1,74 @@
/* Request - Holds all information needed for a (partial) piece request.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
/**
* Holds all information needed for a partial piece request.
*/
class Request
{
final int piece;
final byte[] bs;
final int off;
final int len;
long sendTime;
/**
* Creates a new Request.
*
* @param piece Piece number requested.
* @param bs byte array where response should be stored.
* @param off the offset in the array.
* @param len the number of bytes requested.
*/
Request(int piece, byte[] bs, int off, int len)
{
this.piece = piece;
this.bs = bs;
this.off = off;
this.len = len;
// Sanity check
if (piece < 0 || off < 0 || len <= 0 || off + len > bs.length)
throw new IndexOutOfBoundsException("Illegal Request " + toString());
}
public int hashCode()
{
return piece ^ off ^ len;
}
public boolean equals(Object o)
{
if (o instanceof Request)
{
Request req = (Request)o;
return req.piece == piece && req.off == off && req.len == len;
}
return false;
}
public String toString()
{
return "(" + piece + "," + off + "," + len + ")";
}
}
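
The Request constructor enforces the obvious invariant that the requested range must fit inside the backing buffer. A small in-package sketch (Request is package-local) showing one valid first-chunk request and the IndexOutOfBoundsException the sanity check throws; the buffer size and piece number are made up:

package org.klomp.snark;

/** In-package sketch of the Request constructor checks. Illustrative only. */
public class RequestSketch {
    public static void main(String[] args) {
        byte[] buf = new byte[64 * 1024];                 // made-up piece buffer
        Request ok = new Request(5, buf, 0, 32 * 1024);   // first 32 KB chunk
        System.out.println("built " + ok);                // prints built (5,0,32768)
        try {
            new Request(5, buf, 60 * 1024, 8 * 1024);     // runs past the buffer
        } catch (IndexOutOfBoundsException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}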

View File

@ -0,0 +1,34 @@
/* ShutdownListener - Callback for end of shutdown sequence
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
/**
* Callback for end of shutdown sequence.
*/
interface ShutdownListener
{
/**
* Called when the SnarkShutdown hook has finished shutting down all
* subcomponents.
*/
void shutdown();
}

View File

@ -0,0 +1,808 @@
/* Snark - Main snark program startup class.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;
import java.util.Random;
import java.util.StringTokenizer;
import java.util.Timer;
import java.util.TimerTask;
import net.i2p.client.streaming.I2PServerSocket;
import net.i2p.data.Destination;
import net.i2p.util.I2PThread;
import org.klomp.snark.bencode.BDecoder;
/**
* Main Snark program startup class.
*
* @author Mark Wielaard (mark@klomp.org)
*/
public class Snark
implements StorageListener, CoordinatorListener, ShutdownListener
{
private final static int MIN_PORT = 6881;
private final static int MAX_PORT = 6889;
// Error messages (non-fatal)
public final static int ERROR = 1;
// Warning messages
public final static int WARNING = 2;
// Notices (peer level)
public final static int NOTICE = 3;
// Info messages (protocol policy level)
public final static int INFO = 4;
// Debug info (protocol level)
public final static int DEBUG = 5;
// Very low level stuff (network level)
public final static int ALL = 6;
/**
* What level of debug info to show.
*/
//public static int debug = NOTICE;
// Whether or not to ask the user for commands while sharing
private static boolean command_interpreter = true;
private static final String newline = System.getProperty("line.separator");
private static final String copyright =
"The Hunting of the Snark Project - Copyright (C) 2003 Mark J. Wielaard"
+ newline + newline
+ "Snark comes with ABSOLUTELY NO WARRANTY. This is free software, and"
+ newline
+ "you are welcome to redistribute it under certain conditions; read the"
+ newline
+ "COPYING file for details." + newline + newline
+ "This is the I2P port, allowing anonymous bittorrent (http://www.i2p.net/)" + newline
+ "It will not work with normal torrents, so don't even try ;)";
private static final String usage =
"Press return for help. Type \"quit\" and return to stop.";
private static final String help =
"Commands: 'info', 'list', 'quit'.";
// String indicating main activity
String activity = "Not started";
private static class OOMListener implements I2PThread.OOMEventListener {
public void outOfMemory(OutOfMemoryError err) {
try {
err.printStackTrace();
I2PSnarkUtil.instance().debug("OOM in the snark", Snark.ERROR, err);
} catch (Throwable t) {
System.out.println("OOM in the OOM");
}
System.exit(0);
}
}
public static void main(String[] args)
{
System.out.println(copyright);
System.out.println();
if ( (args.length > 0) && ("--config".equals(args[0])) ) {
I2PThread.addOOMEventListener(new OOMListener());
SnarkManager sm = SnarkManager.instance();
if (args.length > 1)
sm.loadConfig(args[1]);
System.out.println("Running in multitorrent mode");
while (true) {
try {
synchronized (sm) {
sm.wait();
}
} catch (InterruptedException ie) {}
}
}
// Parse debug, share/ip and torrent file options.
Snark snark = parseArguments(args);
SnarkShutdown snarkhook
= new SnarkShutdown(snark.storage,
snark.coordinator,
snark.acceptor,
snark.trackerclient,
snark);
//Runtime.getRuntime().addShutdownHook(snarkhook);
Timer timer = new Timer(true);
TimerTask monitor = new PeerMonitorTask(snark.coordinator);
timer.schedule(monitor,
PeerMonitorTask.MONITOR_PERIOD,
PeerMonitorTask.MONITOR_PERIOD);
// Start command interpreter
if (Snark.command_interpreter)
{
boolean quit = false;
System.out.println();
System.out.println(usage);
System.out.println();
try
{
BufferedReader br = new BufferedReader
(new InputStreamReader(System.in));
String line = br.readLine();
while(!quit && line != null)
{
line = line.toLowerCase();
if ("quit".equals(line))
quit = true;
else if ("list".equals(line))
{
synchronized(snark.coordinator.peers)
{
System.out.println(snark.coordinator.peers.size()
+ " peers -"
+ " (i)nterested,"
+ " (I)nteresting,"
+ " (c)hoking,"
+ " (C)hoked:");
Iterator it = snark.coordinator.peers.iterator();
while (it.hasNext())
{
Peer peer = (Peer)it.next();
System.out.println(peer);
System.out.println("\ti: " + peer.isInterested()
+ " I: " + peer.isInteresting()
+ " c: " + peer.isChoking()
+ " C: " + peer.isChoked());
}
}
}
else if ("info".equals(line))
{
System.out.println("Name: " + snark.meta.getName());
System.out.println("Torrent: " + snark.torrent);
System.out.println("Tracker: " + snark.meta.getAnnounce());
List files = snark.meta.getFiles();
System.out.println("Files: "
+ ((files == null) ? 1 : files.size()));
System.out.println("Pieces: " + snark.meta.getPieces());
System.out.println("Piece size: "
+ snark.meta.getPieceLength(0) / 1024
+ " KB");
System.out.println("Total size: "
+ snark.meta.getTotalLength() / (1024 * 1024)
+ " MB");
}
else if ("".equals(line) || "help".equals(line))
{
System.out.println(usage);
System.out.println(help);
}
else
{
System.out.println("Unknown command: " + line);
System.out.println(usage);
}
if (!quit)
{
System.out.println();
line = br.readLine();
}
}
}
catch(IOException ioe)
{
debug("ERROR while reading stdin: " + ioe, ERROR);
}
// Explicit shutdown.
Runtime.getRuntime().removeShutdownHook(snarkhook);
snarkhook.start();
}
}
public String torrent;
public MetaInfo meta;
public Storage storage;
public PeerCoordinator coordinator;
public ConnectionAcceptor acceptor;
public TrackerClient trackerclient;
public String rootDataDir = ".";
public CompleteListener completeListener;
public boolean stopped;
byte[] id;
Snark(String torrent, String ip, int user_port,
StorageListener slistener, CoordinatorListener clistener) {
this(torrent, ip, user_port, slistener, clistener, true, ".");
}
Snark(String torrent, String ip, int user_port,
StorageListener slistener, CoordinatorListener clistener, boolean start, String rootDir)
{
if (slistener == null)
slistener = this;
if (clistener == null)
clistener = this;
this.torrent = torrent;
this.rootDataDir = rootDir;
stopped = true;
activity = "Network setup";
// "Taking Three as the subject to reason about--
// A convenient number to state--
// We add Seven, and Ten, and then multiply out
// By One Thousand diminished by Eight.
//
// "The result we proceed to divide, as you see,
// By Nine Hundred and Ninety Two:
// Then subtract Seventeen, and the answer must be
// Exactly and perfectly true.
// Create a new ID and fill it with something random. First nine
// zero bytes, then three bytes filled with snark and then
// eight random bytes.
byte snark = (((3 + 7 + 10) * (1000 - 8)) / 992) - 17;
id = new byte[20];
Random random = new Random();
int i;
for (i = 0; i < 9; i++)
id[i] = 0;
id[i++] = snark;
id[i++] = snark;
id[i++] = snark;
while (i < 20)
id[i++] = (byte)random.nextInt(256);
Snark.debug("My peer id: " + PeerID.idencode(id), Snark.INFO);
int port;
IOException lastException = null;
/*
* Don't start a tunnel if the torrent isn't going to be started.
* If we are starting,
* startTorrent() will force a connect.
*
boolean ok = I2PSnarkUtil.instance().connect();
if (!ok) fatal("Unable to connect to I2P");
I2PServerSocket serversocket = I2PSnarkUtil.instance().getServerSocket();
if (serversocket == null)
fatal("Unable to listen for I2P connections");
else {
Destination d = serversocket.getManager().getSession().getMyDestination();
debug("Listening on I2P destination " + d.toBase64() + " / " + d.calculateHash().toBase64(), NOTICE);
}
*/
// Figure out what the torrent argument represents.
meta = null;
File f = null;
try
{
InputStream in = null;
f = new File(torrent);
if (f.exists())
in = new FileInputStream(f);
else
{
activity = "Getting torrent";
File torrentFile = I2PSnarkUtil.instance().get(torrent, 3);
if (torrentFile == null) {
fatal("Unable to fetch " + torrent);
if (false) return; // never reached - fatal(..) throws
} else {
torrentFile.deleteOnExit();
in = new FileInputStream(torrentFile);
}
}
meta = new MetaInfo(new BDecoder(in));
}
catch(IOException ioe)
{
// OK, so it wasn't a torrent metainfo file.
if (f != null && f.exists())
if (ip == null)
fatal("'" + torrent + "' exists,"
+ " but is not a valid torrent metainfo file."
+ System.getProperty("line.separator"), ioe);
else
fatal("I2PSnark does not support creating and tracking a torrent at the moment");
/*
{
// Try to create a new metainfo file
Snark.debug
("Trying to create metainfo torrent for '" + torrent + "'",
NOTICE);
try
{
activity = "Creating torrent";
storage = new Storage
(f, "http://" + ip + ":" + port + "/announce", slistener);
storage.create();
meta = storage.getMetaInfo();
}
catch (IOException ioe2)
{
fatal("Could not create torrent for '" + torrent + "'", ioe2);
}
}
*/
else
fatal("Cannot open '" + torrent + "'", ioe);
}
debug(meta.toString(), INFO);
// When the metainfo torrent was created from an existing file/dir
// it already exists.
if (storage == null)
{
try
{
activity = "Checking storage";
storage = new Storage(meta, slistener);
storage.check(rootDataDir);
// have to figure out when to reopen
// if (!start)
// storage.close();
}
catch (IOException ioe)
{
try { storage.close(); } catch (IOException ioee) {
ioee.printStackTrace();
}
fatal("Could not create storage", ioe);
}
}
/*
* see comment above
*
activity = "Collecting pieces";
coordinator = new PeerCoordinator(id, meta, storage, clistener, this);
PeerCoordinatorSet set = PeerCoordinatorSet.instance();
set.add(coordinator);
ConnectionAcceptor acceptor = ConnectionAcceptor.instance();
acceptor.startAccepting(set, serversocket);
trackerclient = new TrackerClient(meta, coordinator);
*/
if (start)
startTorrent();
}
/**
* Start up contacting peers and querying the tracker
*/
public void startTorrent() {
boolean ok = I2PSnarkUtil.instance().connect();
if (!ok) fatal("Unable to connect to I2P");
if (coordinator == null) {
I2PServerSocket serversocket = I2PSnarkUtil.instance().getServerSocket();
if (serversocket == null)
fatal("Unable to listen for I2P connections");
else {
Destination d = serversocket.getManager().getSession().getMyDestination();
debug("Listening on I2P destination " + d.toBase64() + " / " + d.calculateHash().toBase64(), NOTICE);
}
debug("Starting PeerCoordinator, ConnectionAcceptor, and TrackerClient", NOTICE);
activity = "Collecting pieces";
coordinator = new PeerCoordinator(id, meta, storage, this, this);
PeerCoordinatorSet set = PeerCoordinatorSet.instance();
set.add(coordinator);
ConnectionAcceptor acceptor = ConnectionAcceptor.instance();
acceptor.startAccepting(set, serversocket);
trackerclient = new TrackerClient(meta, coordinator);
}
stopped = false;
boolean coordinatorChanged = false;
if (coordinator.halted()) {
// ok, we have already started and stopped, but the coordinator seems a bit annoying to
// restart safely, so let's build a new one to replace the old
PeerCoordinatorSet set = PeerCoordinatorSet.instance();
set.remove(coordinator);
PeerCoordinator newCoord = new PeerCoordinator(coordinator.getID(), coordinator.getMetaInfo(),
coordinator.getStorage(), coordinator.getListener(), this);
set.add(newCoord);
coordinator = newCoord;
coordinatorChanged = true;
}
if (!trackerclient.started() && !coordinatorChanged) {
trackerclient.start();
} else if (trackerclient.halted() || coordinatorChanged) {
try
{
storage.reopen(rootDataDir);
}
catch (IOException ioe)
{
try { storage.close(); } catch (IOException ioee) {
ioee.printStackTrace();
}
fatal("Could not reopen storage", ioe);
}
TrackerClient newClient = new TrackerClient(coordinator.getMetaInfo(), coordinator);
if (!trackerclient.halted())
trackerclient.halt();
trackerclient = newClient;
trackerclient.start();
}
}
/**
* Stop contacting the tracker and talking with peers
*/
public void stopTorrent() {
stopped = true;
TrackerClient tc = trackerclient;
if (tc != null)
tc.halt();
PeerCoordinator pc = coordinator;
if (pc != null)
pc.halt();
Storage st = storage;
if (st != null) {
if (storage.changed)
SnarkManager.instance().saveTorrentStatus(storage.getMetaInfo(), storage.getBitField());
try {
storage.close();
} catch (IOException ioe) {
System.out.println("Error closing " + torrent);
ioe.printStackTrace();
}
}
if (pc != null)
PeerCoordinatorSet.instance().remove(pc);
}
static Snark parseArguments(String[] args)
{
return parseArguments(args, null, null);
}
/**
* Sets debug, ip and torrent variables then creates a Snark
* instance. Calls usage(), which terminates the program, if
* non-valid argument list. The given listeners will be
* passed to all components that take one.
*/
static Snark parseArguments(String[] args,
StorageListener slistener,
CoordinatorListener clistener)
{
int user_port = -1;
String ip = null;
String torrent = null;
boolean configured = I2PSnarkUtil.instance().configured();
int i = 0;
while (i < args.length)
{
/*
if (args[i].equals("--debug"))
{
debug = INFO;
i++;
// Try if there is an level argument.
if (i < args.length)
{
try
{
int level = Integer.parseInt(args[i]);
if (level >= 0)
{
debug = level;
i++;
}
}
catch (NumberFormatException nfe) { }
}
}
else */ if (args[i].equals("--port"))
{
if (args.length - 1 < i + 1)
usage("--port needs port number to listen on");
try
{
user_port = Integer.parseInt(args[i + 1]);
}
catch (NumberFormatException nfe)
{
usage("--port argument must be a number (" + nfe + ")");
}
i += 2;
}
else if (args[i].equals("--no-commands"))
{
command_interpreter = false;
i++;
}
else if (args[i].equals("--eepproxy"))
{
String proxyHost = args[i+1];
String proxyPort = args[i+2];
if (!configured)
I2PSnarkUtil.instance().setProxy(proxyHost, Integer.parseInt(proxyPort));
i += 3;
}
else if (args[i].equals("--i2cp"))
{
String i2cpHost = args[i+1];
String i2cpPort = args[i+2];
Properties opts = null;
if (i+3 < args.length) {
if (!args[i+3].startsWith("--")) {
opts = new Properties();
StringTokenizer tok = new StringTokenizer(args[i+3], " \t");
while (tok.hasMoreTokens()) {
String str = tok.nextToken();
int split = str.indexOf('=');
if (split > 0) {
opts.setProperty(str.substring(0, split), str.substring(split+1));
}
}
}
}
if (!configured)
I2PSnarkUtil.instance().setI2CPConfig(i2cpHost, Integer.parseInt(i2cpPort), opts);
i += 3 + (opts != null ? 1 : 0);
}
else
{
torrent = args[i];
i++;
break;
}
}
if (torrent == null || i != args.length)
if (torrent != null && torrent.startsWith("-"))
usage("Unknow option '" + torrent + "'.");
else
usage("Need exactly one <url>, <file> or <dir>.");
return new Snark(torrent, ip, user_port, slistener, clistener);
}
private static void usage(String s)
{
System.out.println("snark: " + s);
usage();
}
private static void usage()
{
System.out.println
("Usage: snark [--debug [level]] [--no-commands] [--port <port>]");
System.out.println
(" [--eepproxy hostname portnum]");
System.out.println
(" [--i2cp routerHost routerPort ['name=val name=val name=val']]");
System.out.println
(" (<url>|<file>)");
System.out.println
(" --debug\tShows some extra info and stacktraces");
System.out.println
(" level\tHow much debug details to show");
System.out.println
(" \t(defaults to "
+ NOTICE + ", with --debug to "
+ INFO + ", highest level is "
+ ALL + ").");
System.out.println
(" --no-commands\tDon't read interactive commands or show usage info.");
System.out.println
(" --port\tThe port to listen on for incomming connections");
System.out.println
(" \t(if not given defaults to first free port between "
+ MIN_PORT + "-" + MAX_PORT + ").");
System.out.println
(" --share\tStart torrent tracker on <ip> address or <host> name.");
System.out.println
(" --eepproxy\thttp proxy to use (default of 127.0.0.1 port 4444)");
System.out.println
(" --i2cp\tlocation of your I2P router (default of 127.0.0.1 port 7654)");
System.out.println
(" \toptional settings may be included, such as");
System.out.println
(" \tinbound.length=2 outbound.length=2 inbound.lengthVariance=-1 ");
System.out.println
(" <url> \tURL pointing to .torrent metainfo file to download/share.");
System.out.println
(" <file> \tEither a local .torrent metainfo file to download");
System.out.println
(" \tor (with --share) a file to share.");
System.exit(-1);
}
/**
* Aborts program abnormally.
*/
public void fatal(String s)
{
fatal(s, null);
}
/**
* Aborts program abnormally.
*/
public void fatal(String s, Throwable t)
{
I2PSnarkUtil.instance().debug(s, ERROR, t);
//System.err.println("snark: " + s + ((t == null) ? "" : (": " + t)));
//if (debug >= INFO && t != null)
// t.printStackTrace();
stopTorrent();
throw new RuntimeException("die bart die");
}
/**
* Show debug info if debug is true.
*/
public static void debug(String s, int level)
{
I2PSnarkUtil.instance().debug(s, level, null);
//if (debug >= level)
// System.out.println(s);
}
public void peerChange(PeerCoordinator coordinator, Peer peer)
{
// System.out.println(peer.toString());
}
boolean allocating = false;
public void storageCreateFile(Storage storage, String name, long length)
{
//if (allocating)
// System.out.println(); // Done with last file.
//System.out.print("Creating file '" + name
// + "' of length " + length + ": ");
allocating = true;
}
// How much storage space has been allocated
private long allocated = 0;
public void storageAllocated(Storage storage, long length)
{
allocating = true;
//System.out.print(".");
allocated += length;
//if (allocated == meta.getTotalLength())
// System.out.println(); // We have all the disk space we need.
}
boolean allChecked = false;
boolean checking = false;
boolean prechecking = true;
public void storageChecked(Storage storage, int num, boolean checked)
{
allocating = false;
if (!allChecked && !checking)
{
// Use the MetaInfo from the storage since our own might not
// yet be setup correctly.
MetaInfo meta = storage.getMetaInfo();
//if (meta != null)
// System.out.print("Checking existing "
// + meta.getPieces()
// + " pieces: ");
checking = true;
}
if (!checking)
Snark.debug("Got " + (checked ? "" : "BAD ") + "piece: " + num,
Snark.INFO);
}
public void storageAllChecked(Storage storage)
{
//if (checking)
// System.out.println();
allChecked = true;
checking = false;
}
public void storageCompleted(Storage storage)
{
Snark.debug("Completely received " + torrent, Snark.INFO);
//storage.close();
//System.out.println("Completely received: " + torrent);
if (completeListener != null)
completeListener.torrentComplete(this);
}
public void setWantedPieces(Storage storage)
{
coordinator.setWantedPieces();
}
public void shutdown()
{
// Should not be necessary since all non-daemon threads should
// have died. But in reality this does not always happen.
System.exit(0);
}
public interface CompleteListener {
public void torrentComplete(Snark snark);
}
/** Maintain a configurable total uploader cap
*/
final static int MIN_TOTAL_UPLOADERS = 4;
final static int MAX_TOTAL_UPLOADERS = 10;
public static boolean overUploadLimit(int uploaders) {
PeerCoordinatorSet coordinators = PeerCoordinatorSet.instance();
if (coordinators == null || uploaders <= 0)
return false;
int totalUploaders = 0;
for (Iterator iter = coordinators.iterator(); iter.hasNext(); ) {
PeerCoordinator c = (PeerCoordinator)iter.next();
if (!c.halted())
totalUploaders += c.uploaders;
}
int limit = I2PSnarkUtil.instance().getMaxUploaders();
// Snark.debug("Total uploaders: " + totalUploaders + " Limit: " + limit, Snark.DEBUG);
return totalUploaders > limit;
}
public static boolean overUpBWLimit() {
PeerCoordinatorSet coordinators = PeerCoordinatorSet.instance();
if (coordinators == null)
return false;
long total = 0;
for (Iterator iter = coordinators.iterator(); iter.hasNext(); ) {
PeerCoordinator c = (PeerCoordinator)iter.next();
if (!c.halted())
total += c.getCurrentUploadRate();
}
long limit = 1024l * I2PSnarkUtil.instance().getMaxUpBW();
Snark.debug("Total up bw: " + total + " Limit: " + limit, Snark.WARNING);
return total > limit;
}
public static boolean overUpBWLimit(long total) {
long limit = 1024l * I2PSnarkUtil.instance().getMaxUpBW();
return total > limit;
}
}
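
The peer-id setup in the constructor encodes Carroll's verse as arithmetic: ((3 + 7 + 10) * (1000 - 8)) / 992 - 17 = 19840 / 992 - 17 = 20 - 17 = 3, so bytes 9 through 11 of the 20-byte id are 0x03 and the final eight bytes are random. A one-line check of that constant:

/** Confirms the "snark byte" arithmetic used when building the peer id. */
public class SnarkByteSketch {
    public static void main(String[] args) {
        byte snark = (((3 + 7 + 10) * (1000 - 8)) / 992) - 17;
        System.out.println(snark);   // prints 3
    }
}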

View File

@ -0,0 +1,742 @@
package org.klomp.snark;
import java.io.File;
import java.io.FileInputStream;
import java.io.FilenameFilter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.StringTokenizer;
import java.util.TreeMap;
import net.i2p.I2PAppContext;
import net.i2p.data.Base64;
import net.i2p.data.DataHelper;
import net.i2p.router.RouterContext;
import net.i2p.util.I2PThread;
import net.i2p.util.Log;
/**
* Manage multiple snarks
*/
public class SnarkManager implements Snark.CompleteListener {
private static SnarkManager _instance = new SnarkManager();
public static SnarkManager instance() { return _instance; }
/** map of (canonical) filename to Snark instance (unsynchronized) */
private Map _snarks;
private Object _addSnarkLock;
private String _configFile;
private Properties _config;
private I2PAppContext _context;
private Log _log;
private List _messages;
public static final String PROP_I2CP_HOST = "i2psnark.i2cpHost";
public static final String PROP_I2CP_PORT = "i2psnark.i2cpPort";
public static final String PROP_I2CP_OPTS = "i2psnark.i2cpOptions";
public static final String PROP_EEP_HOST = "i2psnark.eepHost";
public static final String PROP_EEP_PORT = "i2psnark.eepPort";
public static final String PROP_UPLOADERS_TOTAL = "i2psnark.uploaders.total";
public static final String PROP_UPBW_MAX = "i2psnark.upbw.max";
public static final String PROP_DIR = "i2psnark.dir";
public static final String PROP_META_PREFIX = "i2psnark.zmeta.";
public static final String PROP_META_BITFIELD_SUFFIX = ".bitfield";
public static final String PROP_AUTO_START = "i2snark.autoStart"; // oops
public static final String DEFAULT_AUTO_START = "false";
public static final String PROP_USE_OPENTRACKERS = "i2psnark.useOpentrackers";
public static final String DEFAULT_USE_OPENTRACKERS = "true";
public static final String PROP_OPENTRACKERS = "i2psnark.opentrackers";
public static final String DEFAULT_OPENTRACKERS = "http://tracker.welterde.i2p/a";
public static final int MIN_UP_BW = 2;
public static final int DEFAULT_MAX_UP_BW = 10;
private SnarkManager() {
_snarks = new HashMap();
_addSnarkLock = new Object();
_context = I2PAppContext.getGlobalContext();
_log = _context.logManager().getLog(SnarkManager.class);
_messages = new ArrayList(16);
loadConfig("i2psnark.config");
int minutes = getStartupDelayMinutes();
_messages.add("Adding torrents in " + minutes + (minutes == 1 ? " minute" : " minutes"));
I2PThread monitor = new I2PThread(new DirMonitor(), "Snark DirMonitor");
monitor.setDaemon(true);
monitor.start();
if (_context instanceof RouterContext)
((RouterContext)_context).router().addShutdownTask(new SnarkManagerShutdown());
}
private static final int MAX_MESSAGES = 5;
public void addMessage(String message) {
synchronized (_messages) {
_messages.add(message);
while (_messages.size() > MAX_MESSAGES)
_messages.remove(0);
}
if (_log.shouldLog(Log.INFO))
_log.info("MSG: " + message);
}
/** newest last */
public List getMessages() {
synchronized (_messages) {
return new ArrayList(_messages);
}
}
public boolean shouldAutoStart() {
return Boolean.valueOf(_config.getProperty(PROP_AUTO_START, DEFAULT_AUTO_START+"")).booleanValue();
}
public boolean shouldUseOpenTrackers() {
return Boolean.valueOf(_config.getProperty(PROP_USE_OPENTRACKERS, DEFAULT_USE_OPENTRACKERS)).booleanValue();
}
private int getStartupDelayMinutes() { return 3; }
public File getDataDir() {
String dir = _config.getProperty(PROP_DIR);
if ( (dir == null) || (dir.trim().length() <= 0) )
dir = "i2psnark";
return new File(dir);
}
public void loadConfig(String filename) {
_configFile = filename;
if (_config == null)
_config = new Properties();
File cfg = new File(filename);
if (cfg.exists()) {
try {
DataHelper.loadProps(_config, cfg);
} catch (IOException ioe) {
_log.error("Error loading I2PSnark config '" + filename + "'", ioe);
}
}
// now add sane defaults
if (!_config.containsKey(PROP_I2CP_HOST))
_config.setProperty(PROP_I2CP_HOST, "localhost");
if (!_config.containsKey(PROP_I2CP_PORT))
_config.setProperty(PROP_I2CP_PORT, "7654");
if (!_config.containsKey(PROP_I2CP_OPTS))
_config.setProperty(PROP_I2CP_OPTS, "inbound.length=1 inbound.lengthVariance=1 outbound.length=1 outbound.lengthVariance=1");
if (!_config.containsKey(PROP_EEP_HOST))
_config.setProperty(PROP_EEP_HOST, "localhost");
if (!_config.containsKey(PROP_EEP_PORT))
_config.setProperty(PROP_EEP_PORT, "4444");
if (!_config.containsKey(PROP_UPLOADERS_TOTAL))
_config.setProperty(PROP_UPLOADERS_TOTAL, "" + Snark.MAX_TOTAL_UPLOADERS);
if (!_config.containsKey(PROP_UPBW_MAX)) {
try {
if (_context instanceof RouterContext)
_config.setProperty(PROP_UPBW_MAX, "" + (((RouterContext)_context).bandwidthLimiter().getOutboundKBytesPerSecond() / 2));
else
_config.setProperty(PROP_UPBW_MAX, "" + DEFAULT_MAX_UP_BW);
} catch (NoClassDefFoundError ncdfe) {
_config.setProperty(PROP_UPBW_MAX, "" + DEFAULT_MAX_UP_BW);
}
}
if (!_config.containsKey(PROP_DIR))
_config.setProperty(PROP_DIR, "i2psnark");
if (!_config.containsKey(PROP_AUTO_START))
_config.setProperty(PROP_AUTO_START, DEFAULT_AUTO_START);
updateConfig();
}
private void updateConfig() {
String i2cpHost = _config.getProperty(PROP_I2CP_HOST);
int i2cpPort = getInt(PROP_I2CP_PORT, 7654);
String opts = _config.getProperty(PROP_I2CP_OPTS);
Map i2cpOpts = new HashMap();
if (opts != null) {
StringTokenizer tok = new StringTokenizer(opts, " ");
while (tok.hasMoreTokens()) {
String pair = tok.nextToken();
int split = pair.indexOf('=');
if (split > 0)
i2cpOpts.put(pair.substring(0, split), pair.substring(split+1));
}
}
if (i2cpHost != null) {
I2PSnarkUtil.instance().setI2CPConfig(i2cpHost, i2cpPort, i2cpOpts);
_log.debug("Configuring with I2CP options " + i2cpOpts);
}
//I2PSnarkUtil.instance().setI2CPConfig("66.111.51.110", 7654, new Properties());
String eepHost = _config.getProperty(PROP_EEP_HOST);
int eepPort = getInt(PROP_EEP_PORT, 4444);
if (eepHost != null)
I2PSnarkUtil.instance().setProxy(eepHost, eepPort);
I2PSnarkUtil.instance().setMaxUploaders(getInt(PROP_UPLOADERS_TOTAL, Snark.MAX_TOTAL_UPLOADERS));
I2PSnarkUtil.instance().setMaxUpBW(getInt(PROP_UPBW_MAX, DEFAULT_MAX_UP_BW));
getDataDir().mkdirs();
}
private int getInt(String prop, int defaultVal) {
String p = _config.getProperty(prop);
try {
if ( (p != null) && (p.trim().length() > 0) )
return Integer.parseInt(p.trim());
} catch (NumberFormatException nfe) {
// ignore
}
return defaultVal;
}
public void updateConfig(String dataDir, boolean autoStart, String seedPct, String eepHost,
String eepPort, String i2cpHost, String i2cpPort, String i2cpOpts,
String upLimit, String upBW, boolean useOpenTrackers, String openTrackers) {
boolean changed = false;
if (eepHost != null) {
int port = I2PSnarkUtil.instance().getEepProxyPort();
try { port = Integer.parseInt(eepPort); } catch (NumberFormatException nfe) {}
String host = I2PSnarkUtil.instance().getEepProxyHost();
if ( (eepHost.trim().length() > 0) && (port > 0) &&
((!host.equals(eepHost) || (port != I2PSnarkUtil.instance().getEepProxyPort()) )) ) {
I2PSnarkUtil.instance().setProxy(eepHost, port);
changed = true;
_config.setProperty(PROP_EEP_HOST, eepHost);
_config.setProperty(PROP_EEP_PORT, eepPort+"");
addMessage("EepProxy location changed to " + eepHost + ":" + port);
}
}
if (upLimit != null) {
int limit = I2PSnarkUtil.instance().getMaxUploaders();
try { limit = Integer.parseInt(upLimit); } catch (NumberFormatException nfe) {}
if ( limit != I2PSnarkUtil.instance().getMaxUploaders()) {
if ( limit >= Snark.MIN_TOTAL_UPLOADERS ) {
I2PSnarkUtil.instance().setMaxUploaders(limit);
changed = true;
_config.setProperty(PROP_UPLOADERS_TOTAL, "" + limit);
addMessage("Total uploaders limit changed to " + limit);
} else {
addMessage("Minimum total uploaders limit is " + Snark.MIN_TOTAL_UPLOADERS);
}
}
}
if (upBW != null) {
int limit = I2PSnarkUtil.instance().getMaxUpBW();
try { limit = Integer.parseInt(upBW); } catch (NumberFormatException nfe) {}
if ( limit != I2PSnarkUtil.instance().getMaxUpBW()) {
if ( limit >= MIN_UP_BW ) {
I2PSnarkUtil.instance().setMaxUpBW(limit);
changed = true;
_config.setProperty(PROP_UPBW_MAX, "" + limit);
addMessage("Up BW limit changed to " + limit + "KBps");
} else {
addMessage("Minimum Up BW limit is " + MIN_UP_BW + "KBps");
}
}
}
if (i2cpHost != null) {
int oldI2CPPort = I2PSnarkUtil.instance().getI2CPPort();
String oldI2CPHost = I2PSnarkUtil.instance().getI2CPHost();
int port = oldI2CPPort;
try { port = Integer.parseInt(i2cpPort); } catch (NumberFormatException nfe) {}
String host = oldI2CPHost;
Map opts = new HashMap();
if (i2cpOpts == null) i2cpOpts = "";
StringTokenizer tok = new StringTokenizer(i2cpOpts, " \t\n");
while (tok.hasMoreTokens()) {
String pair = tok.nextToken();
int split = pair.indexOf('=');
if (split > 0)
opts.put(pair.substring(0, split), pair.substring(split+1));
}
Map oldOpts = new HashMap();
String oldI2CPOpts = _config.getProperty(PROP_I2CP_OPTS);
if (oldI2CPOpts == null) oldI2CPOpts = "";
tok = new StringTokenizer(oldI2CPOpts, " \t\n");
while (tok.hasMoreTokens()) {
String pair = tok.nextToken();
int split = pair.indexOf('=');
if (split > 0)
oldOpts.put(pair.substring(0, split), pair.substring(split+1));
}
if ( (i2cpHost.trim().length() > 0) && (port > 0) &&
((!host.equals(i2cpHost) ||
(port != I2PSnarkUtil.instance().getI2CPPort()) ||
(!oldOpts.equals(opts)))) ) {
boolean snarksActive = false;
Set names = listTorrentFiles();
for (Iterator iter = names.iterator(); iter.hasNext(); ) {
Snark snark = getTorrent((String)iter.next());
if ( (snark != null) && (!snark.stopped) ) {
snarksActive = true;
break;
}
}
if (snarksActive) {
addMessage("Cannot change the I2CP settings while torrents are active");
_log.debug("i2cp host [" + i2cpHost + "] i2cp port " + port + " opts [" + opts
+ "] oldOpts [" + oldOpts + "]");
} else {
if (I2PSnarkUtil.instance().connected()) {
I2PSnarkUtil.instance().disconnect();
addMessage("Disconnecting old I2CP destination");
}
Properties p = new Properties();
p.putAll(opts);
addMessage("I2CP settings changed to " + i2cpHost + ":" + port + " (" + i2cpOpts.trim() + ")");
I2PSnarkUtil.instance().setI2CPConfig(i2cpHost, port, p);
boolean ok = I2PSnarkUtil.instance().connect();
if (!ok) {
addMessage("Unable to connect with the new settings, reverting to the old I2CP settings");
I2PSnarkUtil.instance().setI2CPConfig(oldI2CPHost, oldI2CPPort, oldOpts);
ok = I2PSnarkUtil.instance().connect();
if (!ok)
addMessage("Unable to reconnect with the old settings!");
} else {
addMessage("Reconnected on the new I2CP destination");
_config.setProperty(PROP_I2CP_HOST, i2cpHost.trim());
_config.setProperty(PROP_I2CP_PORT, "" + port);
_config.setProperty(PROP_I2CP_OPTS, i2cpOpts.trim());
changed = true;
// no PeerAcceptors/I2PServerSockets to deal with, since all snarks are inactive
for (Iterator iter = names.iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
Snark snark = getTorrent(name);
if ( (snark != null) && (snark.acceptor != null) ) {
snark.acceptor.restart();
addMessage("I2CP listener restarted for " + snark.meta.getName());
}
}
}
}
changed = true;
}
}
if (shouldAutoStart() != autoStart) {
_config.setProperty(PROP_AUTO_START, autoStart + "");
addMessage("Adjusted autostart to " + autoStart);
changed = true;
}
if (shouldUseOpenTrackers() != useOpenTrackers) {
_config.setProperty(PROP_USE_OPENTRACKERS, useOpenTrackers + "");
addMessage((useOpenTrackers ? "En" : "Dis") + "abled open trackers - torrent restart required to take effect");
changed = true;
}
if (openTrackers != null) {
if (openTrackers.trim().length() > 0 && !openTrackers.trim().equals(getOpenTrackerString())) {
_config.setProperty(PROP_OPENTRACKERS, openTrackers.trim());
addMessage("Open Tracker list changed - torrent restart required to take effect");
changed = true;
}
}
if (changed) {
saveConfig();
} else {
addMessage("Configuration unchanged");
}
}
public void saveConfig() {
try {
synchronized (_configFile) {
DataHelper.storeProps(_config, new File(_configFile));
}
} catch (IOException ioe) {
addMessage("Unable to save the config to '" + _configFile + "'");
}
}
public Properties getConfig() { return _config; }
/** hardcoded for sanity. perhaps this should be customizable, for people who increase their ulimit, etc. */
private static final int MAX_FILES_PER_TORRENT = 128;
/** set of filenames that we are dealing with */
public Set listTorrentFiles() { synchronized (_snarks) { return new HashSet(_snarks.keySet()); } }
/**
* Grab the torrent given the (canonical) filename
*/
public Snark getTorrent(String filename) { synchronized (_snarks) { return (Snark)_snarks.get(filename); } }
public void addTorrent(String filename) { addTorrent(filename, false); }
public void addTorrent(String filename, boolean dontAutoStart) {
if ((!dontAutoStart) && !I2PSnarkUtil.instance().connected()) {
addMessage("Connecting to I2P");
boolean ok = I2PSnarkUtil.instance().connect();
if (!ok) {
addMessage("Error connecting to I2P - check your I2CP settings");
return;
}
}
File sfile = new File(filename);
try {
filename = sfile.getCanonicalPath();
} catch (IOException ioe) {
_log.error("Unable to add the torrent " + filename, ioe);
addMessage("ERR: Could not add the torrent '" + filename + "': " + ioe.getMessage());
return;
}
File dataDir = getDataDir();
Snark torrent = null;
synchronized (_snarks) {
torrent = (Snark)_snarks.get(filename);
}
// don't hold the _snarks lock while verifying the torrent
if (torrent == null) {
synchronized (_addSnarkLock) {
// double-check
synchronized (_snarks) {
if(_snarks.get(filename) != null)
return;
}
FileInputStream fis = null;
try {
fis = new FileInputStream(sfile);
MetaInfo info = new MetaInfo(fis);
fis.close();
fis = null;
String rejectMessage = locked_validateTorrent(info);
if (rejectMessage != null) {
sfile.delete();
addMessage(rejectMessage);
return;
} else {
torrent = new Snark(filename, null, -1, null, null, false, dataDir.getPath());
torrent.completeListener = this;
synchronized (_snarks) {
_snarks.put(filename, torrent);
}
}
} catch (IOException ioe) {
addMessage("Torrent in " + sfile.getName() + " is invalid: " + ioe.getMessage());
if (sfile.exists())
sfile.delete();
return;
} finally {
if (fis != null) try { fis.close(); } catch (IOException ioe) {}
}
}
} else {
return;
}
// ok, snark created, now let's start it up or configure it further
File f = new File(filename);
if (!dontAutoStart && shouldAutoStart()) {
torrent.startTorrent();
addMessage("Torrent added and started: '" + f.getName() + "'");
} else {
addMessage("Torrent added: '" + f.getName() + "'");
}
}
/**
* Get the timestamp for a torrent from the config file
*/
public long getSavedTorrentTime(MetaInfo metainfo) {
byte[] ih = metainfo.getInfoHash();
String infohash = Base64.encode(ih);
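// a properties key can't contain '=', so the Base64 padding character is stored as '$' (see saveTorrentStatus)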
infohash = infohash.replace('=', '$');
String time = _config.getProperty(PROP_META_PREFIX + infohash + PROP_META_BITFIELD_SUFFIX);
if (time == null)
return 0;
int comma = time.indexOf(',');
if (comma <= 0)
return 0;
time = time.substring(0, comma);
try { return Long.parseLong(time); } catch (NumberFormatException nfe) {}
return 0;
}
/**
* Get the saved bitfield for a torrent from the config file.
* Convert "." to a full bitfield.
*/
public BitField getSavedTorrentBitField(MetaInfo metainfo) {
byte[] ih = metainfo.getInfoHash();
String infohash = Base64.encode(ih);
infohash = infohash.replace('=', '$');
String bf = _config.getProperty(PROP_META_PREFIX + infohash + PROP_META_BITFIELD_SUFFIX);
if (bf == null)
return null;
int comma = bf.indexOf(',');
if (comma <= 0)
return null;
bf = bf.substring(comma + 1).trim();
int len = metainfo.getPieces();
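// "." is shorthand for a completed torrent; expand it to a fully-set bitfield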
if (bf.equals(".")) {
BitField bitfield = new BitField(len);
for (int i = 0; i < len; i++)
bitfield.set(i);
return bitfield;
}
byte[] bitfield = Base64.decode(bf);
if (bitfield == null)
return null;
if (bitfield.length * 8 < len)
return null;
return new BitField(bitfield, len);
}
/**
* Save the completion status of a torrent and the current time in the config file
* in the form "i2psnark.zmeta.$base64infohash=$time,$base64bitfield".
* The config file property key is appended with the Base64 of the infohash,
* with the '=' changed to '$' since a key can't contain '='.
* The time is a standard long converted to string.
* The status is either a bitfield converted to Base64 or "." for a completed
* torrent to save space in the config file and in memory.
*/
public void saveTorrentStatus(MetaInfo metainfo, BitField bitfield) {
byte[] ih = metainfo.getInfoHash();
String infohash = Base64.encode(ih);
infohash = infohash.replace('=', '$');
String now = "" + System.currentTimeMillis();
String bfs;
if (bitfield.complete()) {
bfs = ".";
} else {
byte[] bf = bitfield.getFieldBytes();
bfs = Base64.encode(bf);
}
_config.setProperty(PROP_META_PREFIX + infohash + PROP_META_BITFIELD_SUFFIX, now + "," + bfs);
saveConfig();
}
/**
* Remove the status of a torrent from the config file.
* This helps keep the config file from growing too big.
*/
public void removeTorrentStatus(MetaInfo metainfo) {
byte[] ih = metainfo.getInfoHash();
String infohash = Base64.encode(ih);
infohash = infohash.replace('=', '$');
_config.remove(PROP_META_PREFIX + infohash + PROP_META_BITFIELD_SUFFIX);
saveConfig();
}
private String locked_validateTorrent(MetaInfo info) throws IOException {
String announce = info.getAnnounce();
// basic validation of url
if ((!announce.startsWith("http://")) ||
(announce.indexOf(".i2p/") < 0))
return "Non-i2p tracker in " + info.getName() + ", deleting it";
List files = info.getFiles();
if ( (files != null) && (files.size() > MAX_FILES_PER_TORRENT) ) {
return "Too many files in " + info.getName() + " (" + files.size() + "), deleting it";
} else if (info.getPieces() <= 0) {
return "No pieces in " + info.getName() + "? deleting it";
} else if (info.getPieceLength(0) > 1*1024*1024) {
return "Pieces are too large in " + info.getName() + " (" + info.getPieceLength(0)/1024 + "KB), deleting it";
} else if (info.getTotalLength() > 10*1024*1024*1024l) {
System.out.println("torrent info: " + info.toString());
List lengths = info.getLengths();
if (lengths != null)
for (int i = 0; i < lengths.size(); i++)
System.out.println("File " + i + " is " + lengths.get(i) + " long");
return "Torrents larger than 10GB are not supported yet (because we're paranoid): " + info.getName() + ", deleting it";
} else {
// ok
return null;
}
}
/**
* Stop the torrent, leaving it on the list of torrents unless told to remove it
*/
public Snark stopTorrent(String filename, boolean shouldRemove) {
File sfile = new File(filename);
try {
filename = sfile.getCanonicalPath();
} catch (IOException ioe) {
_log.error("Unable to remove the torrent " + filename, ioe);
addMessage("ERR: Could not remove the torrent '" + filename + "': " + ioe.getMessage());
return null;
}
int remaining = 0;
Snark torrent = null;
synchronized (_snarks) {
if (shouldRemove)
torrent = (Snark)_snarks.remove(filename);
else
torrent = (Snark)_snarks.get(filename);
remaining = _snarks.size();
}
if (torrent != null) {
boolean wasStopped = torrent.stopped;
torrent.stopTorrent();
if (remaining == 0) {
// should we disconnect/reconnect here (taking care to deal with the other thread's
// I2PServerSocket.accept() call properly?)
////I2PSnarkUtil.instance().
}
if (!wasStopped)
addMessage("Torrent stopped: '" + sfile.getName() + "'");
}
return torrent;
}
/**
* Stop the torrent and delete the torrent file itself, but leaving the data
* behind.
*/
public void removeTorrent(String filename) {
Snark torrent = stopTorrent(filename, true);
if (torrent != null) {
File torrentFile = new File(filename);
torrentFile.delete();
if (torrent.storage != null)
removeTorrentStatus(torrent.storage.getMetaInfo());
addMessage("Torrent removed: '" + torrentFile.getName() + "'");
}
}
private class DirMonitor implements Runnable {
public void run() {
try { Thread.sleep(60*1000*getStartupDelayMinutes()); } catch (InterruptedException ie) {}
// the first message was a "We are starting up in 1m"
synchronized (_messages) {
if (_messages.size() == 1)
_messages.remove(0);
}
while (true) {
File dir = getDataDir();
_log.debug("Directory Monitor loop over " + dir.getAbsolutePath());
try {
monitorTorrents(dir);
} catch (Exception e) {
_log.error("Error in the DirectoryMonitor", e);
}
try { Thread.sleep(60*1000); } catch (InterruptedException ie) {}
}
}
}
public void torrentComplete(Snark snark) {
File f = new File(snark.torrent);
long len = snark.meta.getTotalLength();
addMessage("Download complete of " + f.getName()
+ (len < 5*1024*1024 ? " (size: " + (len/1024) + "KB)" : " (size: " + (len/(1024*1024l)) + "MB)"));
}
private void monitorTorrents(File dir) {
String fileNames[] = dir.list(TorrentFilenameFilter.instance());
List foundNames = new ArrayList(0);
if (fileNames != null) {
for (int i = 0; i < fileNames.length; i++) {
try {
foundNames.add(new File(dir, fileNames[i]).getCanonicalPath());
} catch (IOException ioe) {
_log.error("Error resolving '" + fileNames[i] + "' in '" + dir, ioe);
}
}
}
Set existingNames = listTorrentFiles();
// let's find new ones first...
for (int i = 0; i < foundNames.size(); i++) {
if (existingNames.contains(foundNames.get(i))) {
// already known. noop
} else {
if (shouldAutoStart() && !I2PSnarkUtil.instance().connect())
addMessage("Unable to connect to I2P");
addTorrent((String)foundNames.get(i), !shouldAutoStart());
}
}
// now let's see which ones have been removed...
for (Iterator iter = existingNames.iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
if (foundNames.contains(name)) {
// known and still there. noop
} else {
// known, but removed. drop it
stopTorrent(name, true);
}
}
}
private static final String DEFAULT_TRACKERS[] = {
"Postman", "http://YRgrgTLGnbTq2aZOZDJQ~o6Uk5k6TK-OZtx0St9pb0G-5EGYURZioxqYG8AQt~LgyyI~NCj6aYWpPO-150RcEvsfgXLR~CxkkZcVpgt6pns8SRc3Bi-QSAkXpJtloapRGcQfzTtwllokbdC-aMGpeDOjYLd8b5V9Im8wdCHYy7LRFxhEtGb~RL55DA8aYOgEXcTpr6RPPywbV~Qf3q5UK55el6Kex-6VCxreUnPEe4hmTAbqZNR7Fm0hpCiHKGoToRcygafpFqDw5frLXToYiqs9d4liyVB-BcOb0ihORbo0nS3CLmAwZGvdAP8BZ7cIYE3Z9IU9D1G8JCMxWarfKX1pix~6pIA-sp1gKlL1HhYhPMxwyxvuSqx34o3BqU7vdTYwWiLpGM~zU1~j9rHL7x60pVuYaXcFQDR4-QVy26b6Pt6BlAZoFmHhPcAuWfu-SFhjyZYsqzmEmHeYdAwa~HojSbofg0TMUgESRXMw6YThK1KXWeeJVeztGTz25sL8AAAA.i2p/announce.php=http://tracker.postman.i2p/"
// , "eBook", "http://E71FRom6PZNEqTN2Lr8P-sr23b7HJVC32KoGnVQjaX6zJiXwhJy2HsXob36Qmj81TYFZdewFZa9mSJ533UZgGyQkXo2ahctg82JKYZfDe5uDxAn1E9YPjxZCWJaFJh0S~UwSs~9AZ7UcauSJIoNtpxrtbmRNVFLqnkEDdLZi26TeucfOmiFmIWnVblLniWv3tG1boE9Abd-6j3FmYVrRucYuepAILYt6katmVNOk6sXmno1Eynrp~~MBuFq0Ko6~jsc2E2CRVYXDhGHEMdt-j6JUz5D7S2RIVzDRqQyAZLKJ7OdQDmI31przzmne1vOqqqLC~1xUumZVIvF~yOeJUGNjJ1Vx0J8i2BQIusn1pQJ6UCB~ZtZZLQtEb8EPVCfpeRi2ri1M5CyOuxN0V5ekmPHrYIBNevuTCRC26NP7ZS5VDgx1~NaC3A-CzJAE6f1QXi0wMI9aywNG5KGzOPifcsih8eyGyytvgLtrZtV7ykzYpPCS-rDfITncpn5hliPUAAAA.i2p/pub/bt/announce.php=http://de-ebook-archiv.i2p/pub/bt/"
// , "Gaytorrents", "http://uxPWHbK1OIj9HxquaXuhMiIvi21iK0~ZiG9d8G0840ZXIg0r6CbiV71xlsqmdnU6wm0T2LySriM0doW2gUigo-5BNkUquHwOjLROiETnB3ZR0Ml4IGa6QBPn1aAq2d9~g1r1nVjLE~pcFnXB~cNNS7kIhX1d6nLgYVZf0C2cZopEow2iWVUggGGnAA9mHjE86zLEnTvAyhbAMTqDQJhEuLa0ZYSORqzJDMkQt90MV4YMjX1ICY6RfUSFmxEqu0yWTrkHsTtRw48l~dz9wpIgc0a0T9C~eeWvmBFTqlJPtQZwntpNeH~jF7nlYzB58olgV2HHFYpVYD87DYNzTnmNWxCJ5AfDorm6AIUCV2qaE7tZtI1h6fbmGpGlPyW~Kw5GXrRfJwNvr6ajwAVi~bPVnrBwDZezHkfW4slOO8FACPR28EQvaTu9nwhAbqESxV2hCTq6vQSGjuxHeOuzBOEvRWkLKOHWTC09t2DbJ94FSqETmZopTB1ukEmaxRWbKSIaAAAA.i2p/announce.php=http://gaytorrents.i2p/"
, "NickyB", "http://9On6d3cZ27JjwYCtyJJbowe054d5tFnfMjv4PHsYs-EQn4Y4mk2zRixatvuAyXz2MmRfXG-NAUfhKr0KCxRNZbvHmlckYfT-WBzwwpiMAl0wDFY~Pl8cqXuhfikSG5WrqdPfDNNIBuuznS0dqaczf~OyVaoEOpvuP3qV6wKqbSSLpjOwwAaQPHjlRtNIW8-EtUZp-I0LT45HSoowp~6b7zYmpIyoATvIP~sT0g0MTrczWhbVTUZnEkZeLhOR0Duw1-IRXI2KHPbA24wLO9LdpKKUXed05RTz0QklW5ROgR6TYv7aXFufX8kC0-DaKvQ5JKG~h8lcoHvm1RCzNqVE-2aiZnO2xH08H-iCWoLNJE-Td2kT-Tsc~3QdQcnEUcL5BF-VT~QYRld2--9r0gfGl-yDrJZrlrihHGr5J7ImahelNn9PpkVp6eIyABRmJHf2iicrk3CtjeG1j9OgTSwaNmEpUpn4aN7Kx0zNLdH7z6uTgCGD9Kmh1MFYrsoNlTp4AAAA.i2p/bittorrent/announce.php=http://nickyb.i2p/bittorrent/"
// , "Orion", "http://gKik1lMlRmuroXVGTZ~7v4Vez3L3ZSpddrGZBrxVriosCQf7iHu6CIk8t15BKsj~P0JJpxrofeuxtm7SCUAJEr0AIYSYw8XOmp35UfcRPQWyb1LsxUkMT4WqxAT3s1ClIICWlBu5An~q-Mm0VFlrYLIPBWlUFnfPR7jZ9uP5ZMSzTKSMYUWao3ejiykr~mtEmyls6g-ZbgKZawa9II4zjOy-hdxHgP-eXMDseFsrym4Gpxvy~3Fv9TuiSqhpgm~UeTo5YBfxn6~TahKtE~~sdCiSydqmKBhxAQ7uT9lda7xt96SS09OYMsIWxLeQUWhns-C~FjJPp1D~IuTrUpAFcVEGVL-BRMmdWbfOJEcWPZ~CBCQSO~VkuN1ebvIOr9JBerFMZSxZtFl8JwcrjCIBxeKPBmfh~xYh16BJm1BBBmN1fp2DKmZ2jBNkAmnUbjQOqWvUcehrykWk5lZbE7bjJMDFH48v3SXwRuDBiHZmSbsTY6zhGY~GkMQHNGxPMMSIAAAA.i2p/bt/announce.php=http://orion.i2p/bt/"
// , "anonymity", "http://8EoJZIKrWgGuDrxA3nRJs1jsPfiGwmFWL91hBrf0HA7oKhEvAna4Ocx47VLUR9retVEYBAyWFK-eZTPcvhnz9XffBEiJQQ~kFSCqb1fV6IfPiV3HySqi9U5Caf6~hC46fRd~vYnxmaBLICT3N160cxBETqH3v2rdxdJpvYt8q4nMk9LUeVXq7zqCTFLLG5ig1uKgNzBGe58iNcsvTEYlnbYcE930ABmrzj8G1qQSgSwJ6wx3tUQNl1z~4wSOUMan~raZQD60lRK70GISjoX0-D0Po9WmPveN3ES3g72TIET3zc3WPdK2~lgmKGIs8GgNLES1cXTolvbPhdZK1gxddRMbJl6Y6IPFyQ9o4-6Rt3Lp-RMRWZ2TG7j2OMcNSiOmATUhKEFBDfv-~SODDyopGBmfeLw16F4NnYednvn4qP10dyMHcUASU6Zag4mfc2-WivrOqeWhD16fVAh8MoDpIIT~0r9XmwdaVFyLcjbXObabJczxCAW3fodQUnvuSkwzAAAA.i2p/anonymityTracker/announce.php=http://anonymityweb.i2p/anonymityTracker/"
// , "The freak's tracker", "http://mHKva9x24E5Ygfey2llR1KyQHv5f8hhMpDMwJDg1U-hABpJ2NrQJd6azirdfaR0OKt4jDlmP2o4Qx0H598~AteyD~RJU~xcWYdcOE0dmJ2e9Y8-HY51ie0B1yD9FtIV72ZI-V3TzFDcs6nkdX9b81DwrAwwFzx0EfNvK1GLVWl59Ow85muoRTBA1q8SsZImxdyZ-TApTVlMYIQbdI4iQRwU9OmmtefrCe~ZOf4UBS9-KvNIqUL0XeBSqm0OU1jq-D10Ykg6KfqvuPnBYT1BYHFDQJXW5DdPKwcaQE4MtAdSGmj1epDoaEBUa9btQlFsM2l9Cyn1hzxqNWXELmx8dRlomQLlV4b586dRzW~fLlOPIGC13ntPXogvYvHVyEyptXkv890jC7DZNHyxZd5cyrKC36r9huKvhQAmNABT2Y~pOGwVrb~RpPwT0tBuPZ3lHYhBFYmD8y~AOhhNHKMLzea1rfwTvovBMByDdFps54gMN1mX4MbCGT4w70vIopS9yAAAA.i2p/bytemonsoon/announce.php"
, "welterde", "http://BGKmlDOoH3RzFbPRfRpZV2FjpVj8~3moFftw5-dZfDf2070TOe8Tf2~DAVeaM6ZRLdmFEt~9wyFL8YMLMoLoiwGEH6IGW6rc45tstN68KsBDWZqkTohV1q9XFgK9JnCwE~Oi89xLBHsLMTHOabowWM6dkC8nI6QqJC2JODqLPIRfOVrDdkjLwtCrsckzLybNdFmgfoqF05UITDyczPsFVaHtpF1sRggOVmdvCM66otyonlzNcJbn59PA-R808vUrCPMGU~O9Wys0i-NoqtIbtWfOKnjCRFMNw5ex4n9m5Sxm9e20UkpKG6qzEuvKZWi8vTLe1NW~CBrj~vG7I3Ok4wybUFflBFOaBabxYJLlx4xTE1zJIVxlsekmAjckB4v-cQwulFeikR4LxPQ6mCQknW2HZ4JQIq6hL9AMabxjOlYnzh7kjOfRGkck8YgeozcyTvcDUcUsOuSTk06L4kdrv8h2Cozjbloi5zl6KTbj5ZTciKCxi73Pn9grICn-HQqEAAAA.i2p/a=http://tracker.welterde.i2p/stats?mode=top5"
, "mastertracker", "http://VzXD~stRKbL3MOmeTn1iaCQ0CFyTmuFHiKYyo0Rd~dFPZFCYH-22rT8JD7i-C2xzYFa4jT5U2aqHzHI-Jre4HL3Ri5hFtZrLk2ax3ji7Qfb6qPnuYkuiF2E2UDmKUOppI8d9Ye7tjdhQVCy0izn55tBaB-U7UWdcvSK2i85sauyw3G0Gfads1Rvy5-CAe2paqyYATcDmGjpUNLoxbfv9KH1KmwRTNH6k1v4PyWYYnhbT39WfKMbBjSxVQRdi19cyJrULSWhjxaQfJHeWx5Z8Ev4bSPByBeQBFl2~4vqy0S5RypINsRSa3MZdbiAAyn5tr5slWR6QdoqY3qBQgBJFZppy-3iWkFqqKgSxCPundF8gdDLC5ddizl~KYcYKl42y9SGFHIukH-TZs8~em0~iahzsqWVRks3zRG~tlBcX2U3M2~OJs~C33-NKhyfZT7-XFBREvb8Szmd~p66jDxrwOnKaku-G6DyoQipJqIz4VHmY9-y5T8RrUcJcM-5lVoMpAAAA.i2p/announce.php=http://tracker.mastertracker.i2p/"
, "Galen", "http://5jpwQMI5FT303YwKa5Rd38PYSX04pbIKgTaKQsWbqoWjIfoancFdWCShXHLI5G5ofOb0Xu11vl2VEMyPsg1jUFYSVnu4-VfMe3y4TKTR6DTpetWrnmEK6m2UXh91J5DZJAKlgmO7UdsFlBkQfR2rY853-DfbJtQIFl91tbsmjcA5CGQi4VxMFyIkBzv-pCsuLQiZqOwWasTlnzey8GcDAPG1LDcvfflGV~6F5no9mnuisZPteZKlrv~~TDoXTj74QjByWc4EOYlwqK8sbU9aOvz~s31XzErbPTfwiawiaZ0RUI-IDrKgyvmj0neuFTWgjRGVTH8bz7cBZIc3viy6ioD-eMQOrXaQL0TCWZUelRwHRvgdPiQrxdYQs7ixkajeHzxi-Pq0EMm5Vbh3j3Q9kfUFW3JjFDA-MLB4g6XnjCbM5J1rC0oOBDCIEfhQkszru5cyLjHiZ5yeA0VThgu~c7xKHybv~OMXION7V8pBKOgET7ZgAkw1xgYe3Kkyq5syAAAA.i2p/tr/announce.php=http://galen.i2p/tr/"
};
/** comma delimited list of name=announceURL=baseURL for the trackers to be displayed */
public static final String PROP_TRACKERS = "i2psnark.trackers";
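// e.g. i2psnark.trackers=Postman=http://<b64 destination>.i2p/announce.php=http://tracker.postman.i2p/ (single entry shown, destination abbreviated for illustration)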
private static Map trackerMap = null;
/** sorted map of name to announceURL=baseURL */
public Map getTrackers() {
if (trackerMap != null) // only do this once, can't be updated while running
return trackerMap;
Map rv = new TreeMap();
String trackers = _config.getProperty(PROP_TRACKERS);
if ( (trackers == null) || (trackers.trim().length() <= 0) )
trackers = _context.getProperty(PROP_TRACKERS);
if ( (trackers == null) || (trackers.trim().length() <= 0) ) {
for (int i = 0; i < DEFAULT_TRACKERS.length; i += 2)
rv.put(DEFAULT_TRACKERS[i], DEFAULT_TRACKERS[i+1]);
} else {
StringTokenizer tok = new StringTokenizer(trackers, ",");
while (tok.hasMoreTokens()) {
String pair = tok.nextToken();
int split = pair.indexOf('=');
if (split <= 0)
continue;
String name = pair.substring(0, split).trim();
String url = pair.substring(split+1).trim();
if ( (name.length() > 0) && (url.length() > 0) )
rv.put(name, url);
}
}
trackerMap = rv;
return trackerMap;
}
public String getOpenTrackerString() {
return _config.getProperty(PROP_OPENTRACKERS, DEFAULT_OPENTRACKERS);
}
/** comma delimited list of open trackers to use as backups */
public List getOpenTrackers() {
if (!shouldUseOpenTrackers())
return null;
List rv = new ArrayList(1);
String trackers = getOpenTrackerString();
StringTokenizer tok = new StringTokenizer(trackers, ", ");
while (tok.hasMoreTokens())
rv.add(tok.nextToken());
if (rv.size() <= 0)
return null;
return rv;
}
private static class TorrentFilenameFilter implements FilenameFilter {
private static final TorrentFilenameFilter _filter = new TorrentFilenameFilter();
public static TorrentFilenameFilter instance() { return _filter; }
public boolean accept(File dir, String name) {
return (name != null) && (name.endsWith(".torrent"));
}
}
public class SnarkManagerShutdown extends I2PThread {
public void run() {
Set names = listTorrentFiles();
for (Iterator iter = names.iterator(); iter.hasNext(); ) {
Snark snark = getTorrent((String)iter.next());
if ( (snark != null) && (!snark.stopped) )
snark.stopTorrent();
}
}
}
}


@@ -0,0 +1,92 @@
/* SnarkShutdown - Makes sure everything ends correctly when shutting down.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.IOException;
import net.i2p.util.I2PThread;
/**
* Makes sure everything ends correctly when shutting down.
*/
public class SnarkShutdown extends I2PThread
{
private final Storage storage;
private final PeerCoordinator coordinator;
private final ConnectionAcceptor acceptor;
private final TrackerClient trackerclient;
private final ShutdownListener listener;
public SnarkShutdown(Storage storage,
PeerCoordinator coordinator,
ConnectionAcceptor acceptor,
TrackerClient trackerclient,
ShutdownListener listener)
{
this.storage = storage;
this.coordinator = coordinator;
this.acceptor = acceptor;
this.trackerclient = trackerclient;
this.listener = listener;
}
public void run()
{
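// halt in dependency order: stop accepting new connections, stop tracker announces,
// halt the peer coordinator, then close the storage files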
Snark.debug("Shutting down...", Snark.NOTICE);
Snark.debug("Halting ConnectionAcceptor...", Snark.INFO);
if (acceptor != null)
acceptor.halt();
Snark.debug("Halting TrackerClient...", Snark.INFO);
if (trackerclient != null)
trackerclient.halt();
Snark.debug("Halting PeerCoordinator...", Snark.INFO);
if (coordinator != null)
coordinator.halt();
Snark.debug("Closing Storage...", Snark.INFO);
if (storage != null)
{
try
{
storage.close();
}
catch(IOException ioe)
{
I2PSnarkUtil.instance().debug("Couldn't properly close storage", Snark.ERROR, ioe);
throw new RuntimeException("b0rking");
}
}
// XXX - Should actually wait till done...
try
{
Snark.debug("Waiting 5 seconds...", Snark.INFO);
Thread.sleep(5*1000);
}
catch (InterruptedException ie) { /* ignored */ }
listener.shutdown();
}
}


@@ -0,0 +1,43 @@
/* StaticSnark - Main snark startup class for statically linking with gcj.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
/**
* Main snark startup class for statically linking with gcj.
* It references some necessary classes that are normally loaded through
* reflection.
*
* @author Mark Wielaard (mark@klomp.org)
*/
public class StaticSnark
{
public static void main(String[] args)
{
// The GNU security provider is needed for SHA-1 MessageDigest checking.
// So make sure it is available as a security provider.
//Provider gnu = new gnu.java.security.provider.Gnu();
//Security.addProvider(gnu);
// And finally call the normal starting point.
Snark.main(args);
}
}


@@ -0,0 +1,707 @@
/* Storage - Class used to store and retrieve pieces.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.StringTokenizer;
import net.i2p.crypto.SHA1;
/**
* Maintains pieces on disk. Can be used to store and retrieve pieces.
*/
public class Storage
{
private MetaInfo metainfo;
private long[] lengths;
private RandomAccessFile[] rafs;
private String[] names;
private final StorageListener listener;
private BitField bitfield; // BitField to represent the pieces
private int needed; // Number of pieces needed
// XXX - Not always set correctly
int piece_size;
int pieces;
boolean changed;
/** The default piece size. */
private static int MIN_PIECE_SIZE = 256*1024;
private static int MAX_PIECE_SIZE = 1024*1024;
/** The maximum number of pieces in a torrent. */
private static long MAX_PIECES = 100*1024/20;
/**
* Creates a new storage based on the supplied MetaInfo. This will
* try to create and/or check all needed files in the MetaInfo.
*
* @exception IOException when creating and/or checking files fails.
*/
public Storage(MetaInfo metainfo, StorageListener listener)
throws IOException
{
this.metainfo = metainfo;
this.listener = listener;
needed = metainfo.getPieces();
bitfield = new BitField(needed);
}
/**
* Creates a storage from the existing file or directory together
* with an appropriate MetaInfo file as can be announced on the
* given announce String location.
*/
public Storage(File baseFile, String announce, StorageListener listener)
throws IOException
{
this.listener = listener;
// Create names, rafs and lengths arrays.
getFiles(baseFile);
long total = 0;
ArrayList lengthsList = new ArrayList();
for (int i = 0; i < lengths.length; i++)
{
long length = lengths[i];
total += length;
lengthsList.add(new Long(length));
}
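// start at the minimum piece size and keep doubling until the piece count
// fits under MAX_PIECES (or the maximum piece size is reached)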
piece_size = MIN_PIECE_SIZE;
pieces = (int) ((total - 1)/piece_size) + 1;
while (pieces > MAX_PIECES && piece_size < MAX_PIECE_SIZE)
{
piece_size = piece_size*2;
pieces = (int) ((total - 1)/piece_size) +1;
}
// Note that piece_hashes and the bitfield will be filled after
// the MetaInfo is created.
byte[] piece_hashes = new byte[20*pieces];
bitfield = new BitField(pieces);
needed = 0;
List files = new ArrayList();
for (int i = 0; i < names.length; i++)
{
List file = new ArrayList();
StringTokenizer st = new StringTokenizer(names[i], File.separator);
while (st.hasMoreTokens())
{
String part = st.nextToken();
file.add(part);
}
files.add(file);
}
String name = baseFile.getName();
if (files.size() == 1) // FIXME: ...and if base file not a directory or should this be the only check?
// this makes a bad metainfo if the directory has only one file in it
{
files = null;
lengthsList = null;
}
// Note that the piece_hashes are not correctly setup yet.
metainfo = new MetaInfo(announce, baseFile.getName(), null, files,
lengthsList, piece_size, piece_hashes, total);
}
// Creates piece hashes for a new storage.
public void create() throws IOException
{
// if (true) {
fast_digestCreate();
// } else {
// orig_digestCreate();
// }
}
/*
private void orig_digestCreate() throws IOException {
// Calculate piece_hashes
MessageDigest digest = null;
try
{
digest = MessageDigest.getInstance("SHA");
}
catch(NoSuchAlgorithmException nsa)
{
throw new InternalError(nsa.toString());
}
byte[] piece_hashes = metainfo.getPieceHashes();
byte[] piece = new byte[piece_size];
for (int i = 0; i < pieces; i++)
{
int length = getUncheckedPiece(i, piece);
digest.update(piece, 0, length);
byte[] hash = digest.digest();
for (int j = 0; j < 20; j++)
piece_hashes[20 * i + j] = hash[j];
bitfield.set(i);
if (listener != null)
listener.storageChecked(this, i, true);
}
if (listener != null)
listener.storageAllChecked(this);
// Reannounce to force recalculating the info_hash.
metainfo = metainfo.reannounce(metainfo.getAnnounce());
}
*/
private void fast_digestCreate() throws IOException {
// Calculate piece_hashes
SHA1 digest = new SHA1();
byte[] piece_hashes = metainfo.getPieceHashes();
byte[] piece = new byte[piece_size];
for (int i = 0; i < pieces; i++)
{
int length = getUncheckedPiece(i, piece);
digest.update(piece, 0, length);
byte[] hash = digest.digest();
for (int j = 0; j < 20; j++)
piece_hashes[20 * i + j] = hash[j];
bitfield.set(i);
if (listener != null)
listener.storageChecked(this, i, true);
}
if (listener != null)
listener.storageAllChecked(this);
// Reannounce to force recalculating the info_hash.
metainfo = metainfo.reannounce(metainfo.getAnnounce());
}
private void getFiles(File base) throws IOException
{
ArrayList files = new ArrayList();
addFiles(files, base);
int size = files.size();
names = new String[size];
lengths = new long[size];
rafs = new RandomAccessFile[size];
int i = 0;
Iterator it = files.iterator();
while (it.hasNext())
{
File f = (File)it.next();
names[i] = f.getPath();
if (base.isDirectory() && names[i].startsWith(base.getPath()))
names[i] = names[i].substring(base.getPath().length() + 1);
lengths[i] = f.length();
rafs[i] = new RandomAccessFile(f, "r");
i++;
}
}
private static void addFiles(List l, File f)
{
if (!f.isDirectory())
l.add(f);
else
{
File[] files = f.listFiles();
if (files == null)
{
Snark.debug("WARNING: Skipping '" + f
+ "' not a normal file.", Snark.WARNING);
return;
}
for (int i = 0; i < files.length; i++)
addFiles(l, files[i]);
}
}
/**
* Returns the MetaInfo associated with this Storage.
*/
public MetaInfo getMetaInfo()
{
return metainfo;
}
/**
* How many pieces are still missing from this storage.
*/
public int needed()
{
return needed;
}
/**
* Whether or not this storage contains all pieces of the MetaInfo.
*/
public boolean complete()
{
return needed == 0;
}
/**
* The BitField that tells which pieces this storage contains.
* Do not change this since this is the current state of the storage.
*/
public BitField getBitField()
{
return bitfield;
}
/**
* Creates (and/or checks) all files from the metainfo file list.
*/
public void check(String rootDir) throws IOException
{
File base = new File(rootDir, filterName(metainfo.getName()));
// look for saved bitfield and timestamp in the config file
long savedTime = SnarkManager.instance().getSavedTorrentTime(metainfo);
BitField savedBitField = SnarkManager.instance().getSavedTorrentBitField(metainfo);
boolean useSavedBitField = savedTime > 0 && savedBitField != null;
List files = metainfo.getFiles();
if (files == null)
{
// Create base as file.
Snark.debug("Creating/Checking file: " + base, Snark.NOTICE);
if (!base.createNewFile() && !base.exists())
throw new IOException("Could not create file " + base);
lengths = new long[1];
rafs = new RandomAccessFile[1];
names = new String[1];
lengths[0] = metainfo.getTotalLength();
if (useSavedBitField) {
long lm = base.lastModified();
if (lm <= 0 || lm > savedTime)
useSavedBitField = false;
}
if (base.exists() && ((useSavedBitField && savedBitField.complete()) || !base.canWrite()))
rafs[0] = new RandomAccessFile(base, "r");
else
rafs[0] = new RandomAccessFile(base, "rw");
names[0] = base.getName();
}
else
{
// Create base as dir.
Snark.debug("Creating/Checking directory: " + base, Snark.NOTICE);
if (!base.mkdir() && !base.isDirectory())
throw new IOException("Could not create directory " + base);
List ls = metainfo.getLengths();
int size = files.size();
long total = 0;
lengths = new long[size];
rafs = new RandomAccessFile[size];
names = new String[size];
for (int i = 0; i < size; i++)
{
File f = createFileFromNames(base, (List)files.get(i));
lengths[i] = ((Long)ls.get(i)).longValue();
total += lengths[i];
if (useSavedBitField) {
long lm = base.lastModified();
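// note: this checks the timestamp of the base directory, not of the individual file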
if (lm <= 0 || lm > savedTime)
useSavedBitField = false;
}
if (f.exists() && ((useSavedBitField && savedBitField.complete()) || !f.canWrite()))
rafs[i] = new RandomAccessFile(f, "r");
else
rafs[i] = new RandomAccessFile(f, "rw");
names[i] = f.getName();
}
// Sanity check for metainfo file.
long metalength = metainfo.getTotalLength();
if (total != metalength)
throw new IOException("File lengths do not add up "
+ total + " != " + metalength);
}
if (useSavedBitField) {
bitfield = savedBitField;
needed = metainfo.getPieces() - bitfield.count();
Snark.debug("Found saved state and files unchanged, skipping check", Snark.NOTICE);
} else {
checkCreateFiles();
SnarkManager.instance().saveTorrentStatus(metainfo, bitfield);
}
}
/**
* Reopen the file descriptors for a restart
* Do existence check but no length check or data reverification
*/
public void reopen(String rootDir) throws IOException
{
File base = new File(rootDir, filterName(metainfo.getName()));
List files = metainfo.getFiles();
if (files == null)
{
// Reopen base as file.
Snark.debug("Reopening file: " + base, Snark.NOTICE);
if (!base.exists())
throw new IOException("Could not reopen file " + base);
if (complete() || !base.canWrite()) // hope we can get away with this, if we are only seeding...
rafs[0] = new RandomAccessFile(base, "r");
else
rafs[0] = new RandomAccessFile(base, "rw");
}
else
{
// Reopen base as dir.
Snark.debug("Reopening directory: " + base, Snark.NOTICE);
if (!base.isDirectory())
throw new IOException("Could not reopen directory " + base);
int size = files.size();
for (int i = 0; i < size; i++)
{
File f = getFileFromNames(base, (List)files.get(i));
if (!f.exists())
throw new IOException("Could not reopen file " + f);
if (complete() || !f.canWrite()) // see above re: only seeding
rafs[i] = new RandomAccessFile(f, "r");
else
rafs[i] = new RandomAccessFile(f, "rw");
}
}
}
/**
* Removes 'suspicious' characters from the given file name.
*/
private static String filterName(String name)
{
// XXX - Is this enough?
return name.replace(File.separatorChar, '_');
}
private File createFileFromNames(File base, List names) throws IOException
{
File f = null;
Iterator it = names.iterator();
while (it.hasNext())
{
String name = filterName((String)it.next());
if (it.hasNext())
{
// Another dir in the hierarchy.
f = new File(base, name);
if (!f.mkdir() && !f.isDirectory())
throw new IOException("Could not create directory " + f);
base = f;
}
else
{
// The final element (file) in the hierarchy.
f = new File(base, name);
if (!f.createNewFile() && !f.exists())
throw new IOException("Could not create file " + f);
}
}
return f;
}
public static File getFileFromNames(File base, List names)
{
Iterator it = names.iterator();
while (it.hasNext())
{
String name = filterName((String)it.next());
base = new File(base, name);
}
return base;
}
private void checkCreateFiles() throws IOException
{
// Whether we are resuming or not,
// if any of the files already exists we assume we are resuming.
boolean resume = false;
// Make sure all files are available and of correct length
for (int i = 0; i < rafs.length; i++)
{
long length = rafs[i].length();
if(length == lengths[i])
{
if (listener != null)
listener.storageAllocated(this, length);
resume = true; // XXX Could dynamically check
}
else if (length == 0)
allocateFile(i);
else {
Snark.debug("File '" + names[i] + "' exists, but has wrong length - repairing corruption", Snark.ERROR);
rafs[i].setLength(lengths[i]);
}
}
// Check which pieces match and which don't
if (resume)
{
pieces = metainfo.getPieces();
byte[] piece = new byte[metainfo.getPieceLength(0)];
for (int i = 0; i < pieces; i++)
{
int length = getUncheckedPiece(i, piece);
boolean correctHash = metainfo.checkPiece(i, piece, 0, length);
if (correctHash)
{
bitfield.set(i);
needed--;
}
if (listener != null)
listener.storageChecked(this, i, correctHash);
}
}
if (listener != null) {
listener.storageAllChecked(this);
if (needed <= 0)
listener.storageCompleted(this);
}
}
private void allocateFile(int nr) throws IOException
{
// XXX - Is this the best way to make sure we have enough space for
// the whole file?
listener.storageCreateFile(this, names[nr], lengths[nr]);
final int ZEROBLOCKSIZE = metainfo.getPieceLength(0);
byte[] zeros = new byte[ZEROBLOCKSIZE];
int i;
for (i = 0; i < lengths[nr]/ZEROBLOCKSIZE; i++)
{
rafs[nr].write(zeros);
if (listener != null)
listener.storageAllocated(this, ZEROBLOCKSIZE);
}
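// write whatever is left over after the last full zero block (may be zero bytes)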
int size = (int)(lengths[nr] - i*ZEROBLOCKSIZE);
rafs[nr].write(zeros, 0, size);
if (listener != null)
listener.storageAllocated(this, size);
}
/**
* Closes the Storage and makes sure that all RandomAccessFiles are
* closed. The Storage is unusable after this.
*/
public void close() throws IOException
{
if (rafs == null) return;
for (int i = 0; i < rafs.length; i++)
{
try {
synchronized(rafs[i])
{
rafs[i].close();
}
} catch (IOException ioe) {
I2PSnarkUtil.instance().debug("Error closing " + rafs[i], Snark.ERROR, ioe);
// gobble gobble
}
}
changed = false;
}
/**
* Returns a byte array containing a portion of the requested piece or null if
* the storage doesn't contain the piece yet.
*/
public byte[] getPiece(int piece, int off, int len) throws IOException
{
if (!bitfield.get(piece))
return null;
//Catch a common place for OOMs esp. on 1MB pieces
byte[] bs;
try {
bs = new byte[len];
} catch (OutOfMemoryError oom) {
I2PSnarkUtil.instance().debug("Out of memory, can't honor request for piece " + piece, Snark.WARNING, oom);
return null;
}
getUncheckedPiece(piece, bs, off, len);
return bs;
}
/**
* Put the piece in the Storage if it is correct.
*
* @return true if the piece was correct (sha metainfo hash
* matches), otherwise false.
* @exception IOException when some storage related error occurs.
*/
public boolean putPiece(int piece, byte[] ba) throws IOException
{
// First check if the piece is correct.
// Copy the array first to be paranoid.
byte[] bs = (byte[]) ba.clone();
int length = bs.length;
boolean correctHash = metainfo.checkPiece(piece, bs, 0, length);
if (listener != null)
listener.storageChecked(this, piece, correctHash);
if (!correctHash)
return false;
boolean complete;
synchronized(bitfield)
{
if (bitfield.get(piece))
return true; // No need to store twice.
else
{
bitfield.set(piece);
needed--;
complete = needed == 0;
}
}
// Early typecast, avoid possibly overflowing a temp integer
long start = (long) piece * (long) metainfo.getPieceLength(0);
int i = 0;
long raflen = lengths[i];
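// walk the file list to find the file the piece starts in and the offset within it;
// a piece may span multiple files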
while (start > raflen)
{
i++;
start -= raflen;
raflen = lengths[i];
}
int written = 0;
int off = 0;
while (written < length)
{
int need = length - written;
int len = (start + need < raflen) ? need : (int)(raflen - start);
synchronized(rafs[i])
{
rafs[i].seek(start);
rafs[i].write(bs, off + written, len);
}
written += len;
if (need - len > 0)
{
i++;
raflen = lengths[i];
start = 0;
}
}
changed = true;
if (complete) {
// listener.storageCompleted(this);
// do we also need to close all of the files and reopen
// them readonly?
// Do a complete check to be sure.
// Temporarily resets the 'needed' variable and 'bitfield', then call
// checkCreateFiles() which will set 'needed' and 'bitfield'
// and also call listener.storageCompleted() if the double-check
// was successful.
// Todo: set a listener variable so the web shows "checking" and don't
// have the user panic when completed amount goes to zero temporarily?
needed = metainfo.getPieces();
bitfield = new BitField(needed);
checkCreateFiles();
if (needed > 0) {
listener.setWantedPieces(this);
Snark.debug("WARNING: Not really done, missing " + needed
+ " pieces", Snark.WARNING);
} else {
SnarkManager.instance().saveTorrentStatus(metainfo, bitfield);
}
}
return true;
}
private int getUncheckedPiece(int piece, byte[] bs)
throws IOException
{
return getUncheckedPiece(piece, bs, 0, metainfo.getPieceLength(piece));
}
private int getUncheckedPiece(int piece, byte[] bs, int off, int length)
throws IOException
{
// XXX - copy/paste code from putPiece().
// Early typecast, avoid possibly overflowing a temp integer
long start = ((long) piece * (long) metainfo.getPieceLength(0)) + off;
int i = 0;
long raflen = lengths[i];
while (start > raflen)
{
i++;
start -= raflen;
raflen = lengths[i];
}
int read = 0;
while (read < length)
{
int need = length - read;
int len = (start + need < raflen) ? need : (int)(raflen - start);
synchronized(rafs[i])
{
rafs[i].seek(start);
rafs[i].readFully(bs, read, len);
}
read += len;
if (need - len > 0)
{
i++;
raflen = lengths[i];
start = 0;
}
}
return length;
}
}


@@ -0,0 +1,65 @@
/* StorageListener.java - Interface used as callback when storage changes.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
/**
* Callback used when Storage changes.
*/
public interface StorageListener
{
/**
* Called when the storage creates a new file of a given length.
*/
void storageCreateFile(Storage storage, String name, long length);
/**
* Called to indicate that length bytes have been allocated.
*/
void storageAllocated(Storage storage, long length);
/**
* Called while the storage is being checked, once for piece number num
* out of the total pieces. When the piece hash matches the expected
* hash, checked will be true; otherwise it will be false.
*/
void storageChecked(Storage storage, int num, boolean checked);
/**
* Called when all pieces in the storage have been checked. Does not
* mean that the storage is complete, just that the state of the
* storage is known.
*/
void storageAllChecked(Storage storage);
/**
* Called the one time when the data is completely received and checked.
*
*/
void storageCompleted(Storage storage);
/** Reset the peer's wanted pieces table
* Call after the storage double-check fails
*
* @param storage the storage that was re-checked
*/
void setWantedPieces(Storage storage);
}


@@ -0,0 +1,415 @@
/* TrackerClient - Class that informs a tracker and gets new peers.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Random;
import java.util.Set;
import net.i2p.util.I2PThread;
import net.i2p.util.Log;
/**
* Informs metainfo tracker of events and gets new peers for peer
* coordinator.
*
* @author Mark Wielaard (mark@klomp.org)
*/
public class TrackerClient extends I2PThread
{
private static final Log _log = new Log(TrackerClient.class);
private static final String NO_EVENT = "";
private static final String STARTED_EVENT = "started";
private static final String COMPLETED_EVENT = "completed";
private static final String STOPPED_EVENT = "stopped";
private static final String NOT_REGISTERED = "torrent not registered"; //bytemonsoon
private final static int SLEEP = 5; // 5 minutes.
private final static int DELAY_MIN = 2000; // 2 secs.
private final static int DELAY_MUL = 1500; // 1.5 secs.
private final static int MAX_REGISTER_FAILS = 10; // * INITIAL_SLEEP = 15m to register
private final static int INITIAL_SLEEP = 90*1000;
private final static int MAX_CONSEC_FAILS = 5; // slow down after this
private final static int LONG_SLEEP = 30*60*1000; // sleep a while after lots of fails
private final MetaInfo meta;
private final PeerCoordinator coordinator;
private final int port;
private boolean stop;
private boolean started;
private List trackers;
public TrackerClient(MetaInfo meta, PeerCoordinator coordinator)
{
// Set unique name.
super("TrackerClient-" + urlencode(coordinator.getID()));
this.meta = meta;
this.coordinator = coordinator;
this.port = 6881; //(port == -1) ? 9 : port;
stop = false;
started = false;
}
public void start() {
if (stop) throw new RuntimeException("Don't rerun me, create a copy");
super.start();
started = true;
}
public boolean halted() { return stop; }
public boolean started() { return started; }
/**
* Interrupts this Thread to stop it.
*/
public void halt()
{
stop = true;
this.interrupt();
}
private boolean verifyConnected() {
while (!stop && !I2PSnarkUtil.instance().connected()) {
boolean ok = I2PSnarkUtil.instance().connect();
if (!ok) {
try { Thread.sleep(30*1000); } catch (InterruptedException ie) {}
}
}
return !stop && I2PSnarkUtil.instance().connected();
}
public void run()
{
String infoHash = urlencode(meta.getInfoHash());
String peerID = urlencode(coordinator.getID());
_log.debug("Announce: [" + meta.getAnnounce() + "] infoHash: " + infoHash);
// Construct the list of trackers for this torrent,
// starting with the primary one listed in the metainfo,
// followed by the secondary open trackers
// It's painful, but try to make sure if an open tracker is also
// the primary tracker, that we don't add it twice.
trackers = new ArrayList(2);
trackers.add(new Tracker(meta.getAnnounce(), true));
List tlist = SnarkManager.instance().getOpenTrackers();
if (tlist != null) {
for (int i = 0; i < tlist.size(); i++) {
String url = (String)tlist.get(i);
if (!url.startsWith("http://")) {
_log.error("Bad announce URL: [" + url + "]");
continue;
}
int slash = url.indexOf('/', 7);
if (slash <= 7) {
_log.error("Bad announce URL: [" + url + "]");
continue;
}
if (meta.getAnnounce().startsWith(url.substring(0, slash)))
continue;
String dest = I2PSnarkUtil.instance().lookup(url.substring(7, slash));
if (dest == null) {
_log.error("Announce host unknown: [" + url + "]");
continue;
}
if (meta.getAnnounce().startsWith("http://" + dest))
continue;
if (meta.getAnnounce().startsWith("http://i2p/" + dest))
continue;
trackers.add(new Tracker(url, false));
_log.debug("Additional announce: [" + url + "] for infoHash: " + infoHash);
}
}
long uploaded = coordinator.getUploaded();
long downloaded = coordinator.getDownloaded();
long left = coordinator.getLeft();
boolean completed = (left == 0);
int sleptTime = 0;
try
{
if (!verifyConnected()) return;
boolean started = false;
boolean firstTime = true;
int consecutiveFails = 0;
Random r = new Random();
while(!stop)
{
try
{
// Sleep some minutes...
// Sleep the minimum interval for all the trackers, but 60s minimum
// except for the first time...
int delay;
int random = r.nextInt(120*1000);
if (firstTime) {
delay = r.nextInt(30*1000);
firstTime = false;
} else if (completed && started)
delay = 3*SLEEP*60*1000 + random;
else if (coordinator.trackerProblems != null && ++consecutiveFails < MAX_CONSEC_FAILS)
delay = INITIAL_SLEEP;
else
// sleep a while; when we wake up we will contact only the trackers whose intervals have passed
delay = SLEEP*60*1000 + random;
if (delay > 0)
Thread.sleep(delay);
}
catch(InterruptedException interrupt)
{
// ignore
}
if (stop)
break;
if (!verifyConnected()) return;
uploaded = coordinator.getUploaded();
downloaded = coordinator.getDownloaded();
left = coordinator.getLeft();
// First time we got a complete download?
String event;
if (!completed && left == 0)
{
completed = true;
event = COMPLETED_EVENT;
}
else
event = NO_EVENT;
// *** loop once for each tracker
// Only do a request when necessary.
sleptTime = 0;
int maxSeenPeers = 0;
for (Iterator iter = trackers.iterator(); iter.hasNext(); ) {
Tracker tr = (Tracker)iter.next();
if ((!stop) && (!tr.stop) &&
(completed || coordinator.needPeers()) &&
(event == COMPLETED_EVENT || System.currentTimeMillis() > tr.lastRequestTime + tr.interval))
{
try
{
if (!tr.started)
event = STARTED_EVENT;
TrackerInfo info = doRequest(tr, infoHash, peerID,
uploaded, downloaded, left,
event);
coordinator.trackerProblems = null;
tr.trackerProblems = null;
tr.registerFails = 0;
tr.consecutiveFails = 0;
if (tr.isPrimary)
consecutiveFails = 0;
started = true;
tr.started = true;
Set peers = info.getPeers();
tr.seenPeers = peers.size();
if (coordinator.trackerSeenPeers < tr.seenPeers) // update rising number quickly
coordinator.trackerSeenPeers = tr.seenPeers;
if ( (left > 0) && (!completed) ) {
// we only want to talk to new people if we need things
// from them (duh)
List ordered = new ArrayList(peers);
Collections.shuffle(ordered);
Iterator it = ordered.iterator();
while (it.hasNext()) {
Peer cur = (Peer)it.next();
// only delay if we actually make an attempt to add peer
if(coordinator.addPeer(cur)) {
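// stagger connection attempts by roughly 2 to 15.5 seconds, derived from the peer's hash so spacing varies per peer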
int delay = DELAY_MUL;
delay *= ((int)cur.getPeerID().getAddress().calculateHash().toBase64().charAt(0)) % 10;
delay += DELAY_MIN;
sleptTime += delay;
try { Thread.sleep(delay); } catch (InterruptedException ie) {}
}
}
}
}
catch (IOException ioe)
{
// Probably not fatal (if it doesn't last too long...)
Snark.debug
("WARNING: Could not contact tracker at '"
+ tr.announce + "': " + ioe, Snark.WARNING);
tr.trackerProblems = ioe.getMessage();
// don't show secondary tracker problems to the user
if (tr.isPrimary)
coordinator.trackerProblems = tr.trackerProblems;
if (tr.trackerProblems.toLowerCase().startsWith(NOT_REGISTERED)) {
// Give a guy some time to register it if using opentrackers too
if (trackers.size() == 1) {
stop = true;
coordinator.snark.stopTorrent();
} else { // hopefully each on the opentrackers list is really open
if (tr.registerFails++ > MAX_REGISTER_FAILS)
tr.stop = true;
}
}
if (++tr.consecutiveFails == MAX_CONSEC_FAILS) {
tr.seenPeers = 0;
if (tr.interval < LONG_SLEEP)
tr.interval = LONG_SLEEP; // slow down
}
}
}
if ((!tr.stop) && maxSeenPeers < tr.seenPeers)
maxSeenPeers = tr.seenPeers;
} // *** end of trackers loop here
// we could try and total the unique peers but that's too hard for now
coordinator.trackerSeenPeers = maxSeenPeers;
if (!started)
Snark.debug(" Retrying in one minute...", Snark.DEBUG);
} // *** end of while loop
} // try
catch (Throwable t)
{
I2PSnarkUtil.instance().debug("TrackerClient: " + t, Snark.ERROR, t);
if (t instanceof OutOfMemoryError)
throw (OutOfMemoryError)t;
}
finally
{
try
{
// try to contact everybody we can
// We don't need I2CP connection for eepget
// if (!verifyConnected()) return;
for (Iterator iter = trackers.iterator(); iter.hasNext(); ) {
Tracker tr = (Tracker)iter.next();
if (tr.started && (!tr.stop) && tr.trackerProblems == null)
doRequest(tr, infoHash, peerID, uploaded,
downloaded, left, STOPPED_EVENT);
}
}
catch(IOException ioe) { /* ignored */ }
}
}
private TrackerInfo doRequest(Tracker tr, String infoHash,
String peerID, long uploaded,
long downloaded, long left, String event)
throws IOException
{
String s = tr.announce
+ "?info_hash=" + infoHash
+ "&peer_id=" + peerID
+ "&port=" + port
+ "&ip=" + I2PSnarkUtil.instance().getOurIPString() + ".i2p"
+ "&uploaded=" + uploaded
+ "&downloaded=" + downloaded
+ "&left=" + left
+ ((event != NO_EVENT) ? ("&event=" + event) : "");
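// e.g. http://tracker.example.i2p/announce.php?info_hash=%ab%12...&peer_id=%4f%00...&port=6881&ip=<our b64 dest>.i2p&uploaded=0&downloaded=0&left=12345&event=started
// (illustrative values, hypothetical tracker host)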
Snark.debug("Sending TrackerClient request: " + s, Snark.INFO);
tr.lastRequestTime = System.currentTimeMillis();
File fetched = I2PSnarkUtil.instance().get(s);
if (fetched == null) {
throw new IOException("Error fetching " + s);
}
fetched.deleteOnExit();
InputStream in = null;
try {
in = new FileInputStream(fetched);
TrackerInfo info = new TrackerInfo(in, coordinator.getID(),
coordinator.getMetaInfo());
Snark.debug("TrackerClient response: " + info, Snark.INFO);
String failure = info.getFailureReason();
if (failure != null)
throw new IOException(failure);
tr.interval = info.getInterval() * 1000;
return info;
} finally {
if (in != null) try { in.close(); } catch (IOException ioe) {}
fetched.delete();
}
}
/**
* Very lazy byte[] to URL encoder. Just encodes everything, even
* "normal" chars.
*/
public static String urlencode(byte[] bs)
{
StringBuffer sb = new StringBuffer(bs.length*3);
for (int i = 0; i < bs.length; i++)
{
int c = bs[i] & 0xFF;
sb.append('%');
if (c < 16)
sb.append('0');
sb.append(Integer.toHexString(c));
}
return sb.toString();
}
private class Tracker
{
String announce;
boolean isPrimary;
long interval;
long lastRequestTime;
String trackerProblems;
boolean stop;
boolean started;
int registerFails;
int consecutiveFails;
int seenPeers;
public Tracker(String a, boolean p)
{
announce = a;
isPrimary = p;
interval = INITIAL_SLEEP;
lastRequestTime = 0;
trackerProblems = null;
stop = false;
started = false;
registerFails = 0;
consecutiveFails = 0;
seenPeers = 0;
}
}
}
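A minimal usage sketch of the announce request built by doRequest() and urlencode() above; it is not part of the original source, and the tracker URL, info hash bytes, and class name AnnounceUrlSketch are made-up placeholders.
// Illustrative only: assembles the query string the same way
// TrackerClient.doRequest() does, using the "encode every byte"
// percent-encoding of TrackerClient.urlencode().
public class AnnounceUrlSketch {
    public static void main(String[] args) {
        byte[] infoHash = new byte[20];                  // normally the SHA-1 of the info dict
        String announce = "http://tracker.example.i2p/a"; // hypothetical tracker URL
        StringBuffer sb = new StringBuffer(announce);
        sb.append("?info_hash=");
        for (int i = 0; i < infoHash.length; i++) {
            int c = infoHash[i] & 0xFF;
            sb.append('%');
            if (c < 16)
                sb.append('0');
            sb.append(Integer.toHexString(c));
        }
        sb.append("&port=6881&uploaded=0&downloaded=0&left=0&event=started");
        System.out.println(sb);
    }
}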

View File

@@ -0,0 +1,136 @@
/* TrackerInfo - Holds information returned by a tracker, mainly the peer list.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.klomp.snark.bencode.BDecoder;
import org.klomp.snark.bencode.BEValue;
import org.klomp.snark.bencode.InvalidBEncodingException;
public class TrackerInfo
{
private final String failure_reason;
private final int interval;
private final Set peers;
public TrackerInfo(InputStream in, byte[] my_id, MetaInfo metainfo)
throws IOException
{
this(new BDecoder(in), my_id, metainfo);
}
public TrackerInfo(BDecoder be, byte[] my_id, MetaInfo metainfo)
throws IOException
{
this(be.bdecodeMap().getMap(), my_id, metainfo);
}
public TrackerInfo(Map m, byte[] my_id, MetaInfo metainfo)
throws IOException
{
BEValue reason = (BEValue)m.get("failure reason");
if (reason != null)
{
failure_reason = reason.getString();
interval = -1;
peers = null;
}
else
{
failure_reason = null;
BEValue beInterval = (BEValue)m.get("interval");
if (beInterval == null)
throw new InvalidBEncodingException("No interval given");
else
interval = beInterval.getInt();
BEValue bePeers = (BEValue)m.get("peers");
if (bePeers == null)
throw new InvalidBEncodingException("No peer list");
else
peers = getPeers(bePeers.getList(), my_id, metainfo);
}
}
public static Set getPeers(InputStream in, byte[] my_id, MetaInfo metainfo)
throws IOException
{
return getPeers(new BDecoder(in), my_id, metainfo);
}
public static Set getPeers(BDecoder be, byte[] my_id, MetaInfo metainfo)
throws IOException
{
return getPeers(be.bdecodeList().getList(), my_id, metainfo);
}
public static Set getPeers(List l, byte[] my_id, MetaInfo metainfo)
throws IOException
{
Set peers = new HashSet(l.size());
Iterator it = l.iterator();
while (it.hasNext())
{
PeerID peerID;
try {
peerID = new PeerID(((BEValue)it.next()).getMap());
} catch (InvalidBEncodingException ibe) {
// don't let one bad entry spoil the whole list
Snark.debug("Discarding peer from list: " + ibe, Snark.ERROR);
continue;
}
peers.add(new Peer(peerID, my_id, metainfo));
}
return peers;
}
public Set getPeers()
{
return peers;
}
public String getFailureReason()
{
return failure_reason;
}
public int getInterval()
{
return interval;
}
public String toString()
{
if (failure_reason != null)
return "TrackerInfo[FAILED: " + failure_reason + "]";
else
return "TrackerInfo[interval=" + interval
+ ", peers=" + peers + "]";
}
}
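As a small illustration (not part of the original file), the failure branch of the constructor above can be driven with a hand-built map; the error text and the class name TrackerInfoFailureSketch are invented for the example.
import java.util.HashMap;
import java.util.Map;
import org.klomp.snark.TrackerInfo;
import org.klomp.snark.bencode.BEValue;
// Illustrative only: mimics a decoded tracker error response.
public class TrackerInfoFailureSketch {
    public static void main(String[] args) throws Exception {
        Map m = new HashMap();
        m.put("failure reason",
              new BEValue("torrent not registered".getBytes("UTF-8")));
        TrackerInfo info = new TrackerInfo(m, null, null);
        System.out.println(info);                    // TrackerInfo[FAILED: ...]
        System.out.println(info.getFailureReason()); // torrent not registered
    }
}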

View File

@@ -0,0 +1,351 @@
/* BDecoder - Converts an InputStream to BEValues.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark.bencode;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* Decodes a bencoded stream to <code>BEValue</code>s.
*
* A bencoded byte stream can represent byte arrays, numbers, lists and
* maps (dictionaries).
*
* It currently contains a hack to indicate a name of a dictionary of
* which a SHA-1 digest hash should be calculated (the hash over the
* original bencoded bytes).
*
* @author Mark Wielaard (mark@klomp.org).
*/
public class BDecoder
{
// The InputStream to BDecode.
private final InputStream in;
// The last indicator read.
// Zero if unknown.
// '0'..'9' indicates a byte[].
// 'i' indicates a Number.
// 'l' indicates a List.
// 'd' indicates a Map.
// 'e' indicates end of Number, List or Map (only used internally).
// -1 indicates end of stream.
// Call getNextIndicator to get the current value (will never return zero).
private int indicator = 0;
// Used for ugly hack to get SHA hash over the metainfo info map
private String special_map = "info";
private boolean in_special_map = false;
private final MessageDigest sha_digest;
// Ugly hack. Return the SHA hash over the bytes that make up the special map.
public byte[] get_special_map_digest()
{
byte[] result = sha_digest.digest();
return result;
}
// Ugly hack. Name defaults to "info".
public void set_special_map_name(String name)
{
special_map = name;
}
/**
* Initializes a new BDecoder. Nothing is read from the given
* <code>InputStream</code> yet.
*/
public BDecoder(InputStream in)
{
this.in = in;
// XXX - Used for ugly hack.
try
{
sha_digest = MessageDigest.getInstance("SHA");
}
catch(NoSuchAlgorithmException nsa)
{
throw new InternalError(nsa.toString());
}
}
/**
* Creates a new BDecoder and immediately decodes the first value it
* sees.
*
* @return The first BEValue on the stream or null when the stream
* has ended.
*
* @exception InvalidBEncodingException when the stream doesn't start with a
* bencoded value or the stream isn't a bencoded stream at all.
* @exception IOException when something bad happens with the stream
* to read from.
*/
public static BEValue bdecode(InputStream in) throws IOException
{
return new BDecoder(in).bdecode();
}
/**
* Returns what the next bencoded object will be on the stream or -1
* when the end of stream has been reached. Can return something
* unexpected (not '0' .. '9', 'i', 'l' or 'd') when the stream
* isn't bencoded.
*
* This might or might not read one extra byte from the stream.
*/
public int getNextIndicator() throws IOException
{
if (indicator == 0)
{
indicator = in.read();
// XXX - Used for ugly hack
if (in_special_map) sha_digest.update((byte)indicator);
}
return indicator;
}
/**
* Gets the next indicator and returns either null when the stream
* has ended or bdecodes the rest of the stream and returns the
* appropriate BEValue encoded object.
*/
public BEValue bdecode() throws IOException
{
indicator = getNextIndicator();
if (indicator == -1)
return null;
if (indicator >= '0' && indicator <= '9')
return bdecodeBytes();
else if (indicator == 'i')
return bdecodeNumber();
else if (indicator == 'l')
return bdecodeList();
else if (indicator == 'd')
return bdecodeMap();
else
throw new InvalidBEncodingException
("Unknown indicator '" + indicator + "'");
}
/**
* Returns the next bencoded value on the stream and makes sure it
* is a byte array. If it is not a bencoded byte array it will throw
* InvalidBEncodingException.
*/
public BEValue bdecodeBytes() throws IOException
{
int c = getNextIndicator();
int num = c - '0';
if (num < 0 || num > 9)
throw new InvalidBEncodingException("Number expected, not '"
+ (char)c + "'");
indicator = 0;
c = read();
int i = c - '0';
while (i >= 0 && i <= 9)
{
// XXX - This can overflow!
num = num*10 + i;
c = read();
i = c - '0';
}
if (c != ':')
throw new InvalidBEncodingException("Colon expected, not '"
+ (char)c + "'");
return new BEValue(read(num));
}
/**
* Returns the next bencoded value on the stream and makes sure it
* is a number. If it is not a number it will throw
* InvalidBEncodingException.
*/
public BEValue bdecodeNumber() throws IOException
{
int c = getNextIndicator();
if (c != 'i')
throw new InvalidBEncodingException("Expected 'i', not '"
+ (char)c + "'");
indicator = 0;
c = read();
if (c == '0')
{
c = read();
if (c == 'e')
return new BEValue(BigInteger.ZERO);
else
throw new InvalidBEncodingException("'e' expected after zero,"
+ " not '" + (char)c + "'");
}
// XXX - We don't support more than 255 char big integers
char[] chars = new char[256];
int off = 0;
if (c == '-')
{
c = read();
if (c == '0')
throw new InvalidBEncodingException("Negative zero not allowed");
chars[off] = '-'; // keep the sign; the first digit is stored below
off++;
}
if (c < '1' || c > '9')
throw new InvalidBEncodingException("Invalid Integer start '"
+ (char)c + "'");
chars[off] = (char)c;
off++;
c = read();
int i = c - '0';
while(i >= 0 && i <= 9)
{
chars[off] = (char)c;
off++;
c = read();
i = c - '0';
}
if (c != 'e')
throw new InvalidBEncodingException("Integer should end with 'e'");
String s = new String(chars, 0, off);
return new BEValue(new BigInteger(s));
}
/**
* Returns the next bencoded value on the stream and makes sure it
* is a list. If it is not a list it will throw
* InvalidBEncodingException.
*/
public BEValue bdecodeList() throws IOException
{
int c = getNextIndicator();
if (c != 'l')
throw new InvalidBEncodingException("Expected 'l', not '"
+ (char)c + "'");
indicator = 0;
List result = new ArrayList();
c = getNextIndicator();
while (c != 'e')
{
result.add(bdecode());
c = getNextIndicator();
}
indicator = 0;
return new BEValue(result);
}
/**
* Returns the next bencoded value on the stream and makes sure it
* is a map (dictionary). If it is not a map it will throw
* InvalidBEncodingException.
*/
public BEValue bdecodeMap() throws IOException
{
int c = getNextIndicator();
if (c != 'd')
throw new InvalidBEncodingException("Expected 'd', not '"
+ (char)c + "'");
indicator = 0;
Map result = new HashMap();
c = getNextIndicator();
while (c != 'e')
{
// Dictionary keys are always strings.
String key = bdecode().getString();
// XXX ugly hack
boolean special = special_map.equals(key);
if (special)
in_special_map = true;
BEValue value = bdecode();
result.put(key, value);
// XXX ugly hack continued
if (special)
in_special_map = false;
c = getNextIndicator();
}
indicator = 0;
return new BEValue(result);
}
/**
* Returns the next byte read from the InputStream (as int).
* Throws EOFException if InputStream.read() returned -1.
*/
private int read() throws IOException
{
int c = in.read();
if (c == -1)
throw new EOFException();
if (in_special_map) sha_digest.update((byte)c);
return c;
}
/**
* Returns a byte[] containing length valid bytes starting at offset
* zero. Throws EOFException if InputStream.read() returned -1
* before all requested bytes could be read. The returned byte[] is
* newly allocated on each call and contains exactly length valid bytes.
*/
private byte[] read(int length) throws IOException
{
byte[] result = new byte[length];
int read = 0;
while (read < length)
{
int i = in.read(result, read, length - read);
if (i == -1)
throw new EOFException();
read += i;
}
if (in_special_map) sha_digest.update(result, 0, length);
return result;
}
}
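A brief usage sketch, not in the original tree: decoding a tiny hand-written bencoded dictionary with the class above. The byte string and the class name BDecoderSketch are illustrative assumptions.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Map;
import org.klomp.snark.bencode.BDecoder;
import org.klomp.snark.bencode.BEValue;
// Illustrative only: "d8:intervali1800e5:peerslee" is a dictionary with
// interval=1800 and an empty peers list, roughly the shape of a tracker reply.
public class BDecoderSketch {
    public static void main(String[] args) throws IOException {
        byte[] data = "d8:intervali1800e5:peerslee".getBytes("UTF-8");
        BEValue root = BDecoder.bdecode(new ByteArrayInputStream(data));
        Map dict = root.getMap();                         // String -> BEValue
        int interval = ((BEValue)dict.get("interval")).getInt();
        int peers = ((BEValue)dict.get("peers")).getList().size();
        System.out.println("interval=" + interval + " peers=" + peers);
    }
}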

View File

@@ -0,0 +1,192 @@
/* BEValue - Holds different types that a bencoded byte array can represent.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark.bencode;
import java.io.UnsupportedEncodingException;
import java.util.List;
import java.util.Map;
/**
* Holds different types that a bencoded byte array can represent.
* You need to call the correct get method to get the correct java
* type object. If the BEValue wasn't actually of the requested type
* you will get an InvalidBEncodingException.
*
* @author Mark Wielaard (mark@klomp.org)
*/
public class BEValue
{
// This is either a byte[], Number, List or Map.
private final Object value;
public BEValue(byte[] value)
{
this.value = value;
}
public BEValue(Number value)
{
this.value = value;
}
public BEValue(List value)
{
this.value = value;
}
public BEValue(Map value)
{
this.value = value;
}
/**
* Returns this BEValue as a String. This operation only succeeds
* when the BEValue is a byte[], otherwise it will throw an
* InvalidBEncodingException. The byte[] will be interpreted as
* UTF-8 encoded characters.
*/
public String getString() throws InvalidBEncodingException
{
try
{
return new String(getBytes(), "UTF-8");
}
catch (ClassCastException cce)
{
throw new InvalidBEncodingException(cce.toString());
}
catch (UnsupportedEncodingException uee)
{
throw new InternalError(uee.toString());
}
}
/**
* Returns this BEValue as a byte[]. This operation only succeeds
* when the BEValue is actually a byte[], otherwise it will throw an
* InvalidBEncodingException.
*/
public byte[] getBytes() throws InvalidBEncodingException
{
try
{
return (byte[])value;
}
catch (ClassCastException cce)
{
throw new InvalidBEncodingException(cce.toString());
}
}
/**
* Returns this BEValue as a Number. This operation only succeeds
* when the BEValue is actually a Number, otherwise it will throw an
* InvalidBEncodingException.
*/
public Number getNumber() throws InvalidBEncodingException
{
try
{
return (Number)value;
}
catch (ClassCastException cce)
{
throw new InvalidBEncodingException(cce.toString());
}
}
/**
* Returns this BEValue as int. This operation only succeeds when
* the BEValue is actually a Number, otherwise it will throw an
* InvalidBEncodingException. The returned int is the result of
* <code>Number.intValue()</code>.
*/
public int getInt() throws InvalidBEncodingException
{
return getNumber().intValue();
}
/**
* Returns this BEValue as long. This operation only succeeds when
* the BEValue is actually a Number, otherwise it will throw an
* InvalidBEncodingException. The returned long is the result of
* <code>Number.longValue()</code>.
*/
public long getLong() throws InvalidBEncodingException
{
return getNumber().longValue();
}
/**
* Returns this BEValue as a List of BEValues. This operation only
* succeeds when the BEValue is actually a List, otherwise it will
* throw an InvalidBEncodingException.
*/
public List getList() throws InvalidBEncodingException
{
try
{
return (List)value;
}
catch (ClassCastException cce)
{
throw new InvalidBEncodingException(cce.toString());
}
}
/**
* Returns this BEValue as a Map of BEValue keys and BEValue
* values. This operation only succeeds when the BEValue is actually
* a Map, otherwise it will throw an InvalidBEncodingException.
*/
public Map getMap() throws InvalidBEncodingException
{
try
{
return (Map)value;
}
catch (ClassCastException cce)
{
throw new InvalidBEncodingException(cce.toString());
}
}
/** return the untyped value */
public Object getValue() { return value; }
public String toString()
{
String valueString;
if (value instanceof byte[])
{
byte[] bs = (byte[])value;
// XXX - Stupid heuristic...
if (bs.length <= 12)
valueString = new String(bs);
else
valueString = "bytes:" + bs.length;
}
else
valueString = value.toString();
return "BEValue[" + valueString + "]";
}
}

View File

@@ -0,0 +1,191 @@
/* BDecoder - Converts an InputStream to BEValues.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark.bencode;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
public class BEncoder
{
public static byte[] bencode(Object o) throws IllegalArgumentException
{
try
{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bencode(o, baos);
return baos.toByteArray();
}
catch (IOException ioe)
{
throw new InternalError(ioe.toString());
}
}
public static void bencode(Object o, OutputStream out)
throws IOException, IllegalArgumentException
{
if (o instanceof String)
bencode((String)o, out);
else if (o instanceof byte[])
bencode((byte[])o, out);
else if (o instanceof Number)
bencode((Number)o, out);
else if (o instanceof List)
bencode((List)o, out);
else if (o instanceof Map)
bencode((Map)o, out);
else if (o instanceof BEValue)
bencode(((BEValue)o).getValue(), out);
else
throw new IllegalArgumentException("Cannot bencode: " + o.getClass());
}
public static byte[] bencode(String s)
{
try
{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bencode(s, baos);
return baos.toByteArray();
}
catch (IOException ioe)
{
throw new InternalError(ioe.toString());
}
}
public static void bencode(String s, OutputStream out) throws IOException
{
byte[] bs = s.getBytes("UTF-8");
bencode(bs, out);
}
public static byte[] bencode(Number n)
{
try
{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bencode(n, baos);
return baos.toByteArray();
}
catch (IOException ioe)
{
throw new InternalError(ioe.toString());
}
}
public static void bencode(Number n, OutputStream out) throws IOException
{
out.write('i');
String s = n.toString();
out.write(s.getBytes("UTF-8"));
out.write('e');
}
public static byte[] bencode(List l)
{
try
{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bencode(l, baos);
return baos.toByteArray();
}
catch (IOException ioe)
{
throw new InternalError(ioe.toString());
}
}
public static void bencode(List l, OutputStream out) throws IOException
{
out.write('l');
Iterator it = l.iterator();
while (it.hasNext())
bencode(it.next(), out);
out.write('e');
}
public static byte[] bencode(byte[] bs)
{
try
{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bencode(bs, baos);
return baos.toByteArray();
}
catch (IOException ioe)
{
throw new InternalError(ioe.toString());
}
}
public static void bencode(byte[] bs, OutputStream out) throws IOException
{
String l = Integer.toString(bs.length);
out.write(l.getBytes("UTF-8"));
out.write(':');
out.write(bs);
}
public static byte[] bencode(Map m)
{
try
{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bencode(m, baos);
return baos.toByteArray();
}
catch (IOException ioe)
{
throw new InternalError(ioe.toString());
}
}
public static void bencode(Map m, OutputStream out) throws IOException
{
out.write('d');
// Keys must be sorted. XXX - But is this the correct order?
Set s = m.keySet();
List l = new ArrayList(s);
Collections.sort(l);
Iterator it = l.iterator();
while(it.hasNext())
{
// Keys must be Strings.
String key = (String)it.next();
Object value = m.get(key);
bencode(key, out);
bencode(value, out);
}
out.write('e');
}
}
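For illustration only (not from the original source): the encoder above writes map keys in sorted order, so the small map below bencodes to exactly d8:intervali1800e5:peerslee. The class name BEncoderSketch is invented.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import org.klomp.snark.bencode.BEncoder;
// Illustrative only: Numbers become i...e, Lists become l...e, and
// String keys are written as length-prefixed byte strings in sorted order.
public class BEncoderSketch {
    public static void main(String[] args) throws Exception {
        Map m = new HashMap();
        m.put("interval", new Integer(1800));
        m.put("peers", new ArrayList());
        byte[] encoded = BEncoder.bencode(m);
        System.out.println(new String(encoded, "UTF-8")); // d8:intervali1800e5:peerslee
    }
}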

View File

@@ -0,0 +1,36 @@
/* InvalidBEncodingException - Thrown when a bencoded stream is corrupted.
Copyright (C) 2003 Mark J. Wielaard
This file is part of Snark.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
package org.klomp.snark.bencode;
import java.io.IOException;
/**
* Exception thrown when a bencoded stream is corrupted.
*
* @author Mark Wielaard (mark@klomp.org)
*/
public class InvalidBEncodingException extends IOException
{
public InvalidBEncodingException(String message)
{
super(message);
}
}

View File

@@ -0,0 +1,916 @@
package org.klomp.snark.web;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import net.i2p.I2PAppContext;
import net.i2p.data.Base64;
import net.i2p.data.DataHelper;
import net.i2p.util.FileUtil;
import net.i2p.util.I2PThread;
import net.i2p.util.Log;
import org.klomp.snark.I2PSnarkUtil;
import org.klomp.snark.MetaInfo;
import org.klomp.snark.Peer;
import org.klomp.snark.Snark;
import org.klomp.snark.SnarkManager;
import org.klomp.snark.Storage;
import org.klomp.snark.TrackerClient;
/**
*
*/
public class I2PSnarkServlet extends HttpServlet {
private I2PAppContext _context;
private Log _log;
private SnarkManager _manager;
private static long _nonce;
public static final String PROP_CONFIG_FILE = "i2psnark.configFile";
public void init(ServletConfig cfg) throws ServletException {
super.init(cfg);
_context = I2PAppContext.getGlobalContext();
_log = _context.logManager().getLog(I2PSnarkServlet.class);
_nonce = _context.random().nextLong();
_manager = SnarkManager.instance();
String configFile = _context.getProperty(PROP_CONFIG_FILE);
if ( (configFile == null) || (configFile.trim().length() <= 0) )
configFile = "i2psnark.config";
_manager.loadConfig(configFile);
}
public void service(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
req.setCharacterEncoding("UTF-8");
resp.setCharacterEncoding("UTF-8");
resp.setContentType("text/html; charset=UTF-8");
long stats[] = {0,0,0,0};
String nonce = req.getParameter("nonce");
if ( (nonce != null) && (nonce.equals(String.valueOf(_nonce))) )
processRequest(req);
String peerParam = req.getParameter("p");
String peerString;
if (peerParam == null) {
peerString = "";
} else {
peerString = "?p=" + peerParam;
}
PrintWriter out = resp.getWriter();
out.write(HEADER_BEGIN);
// we want it to go to the base URI so we don't refresh with some funky action= value
out.write("<meta http-equiv=\"refresh\" content=\"60;" + req.getRequestURI() + peerString + "\">\n");
out.write(HEADER);
out.write("<table border=\"0\" width=\"100%\">\n");
out.write("<tr><td width=\"20%\" class=\"snarkTitle\" valign=\"top\" align=\"left\">");
out.write("I2PSnark<br />\n");
out.write("<table border=\"0\" width=\"100%\">\n");
out.write("<tr><td><a href=\"" + req.getRequestURI() + peerString + "\" class=\"snarkRefresh\">Refresh</a>\n");
out.write("<td><a href=\"http://forum.i2p/viewforum.php?f=21\" class=\"snarkRefresh\">Forum</a>\n");
int count = 0;
Map trackers = _manager.getTrackers();
for (Iterator iter = trackers.keySet().iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
String baseURL = (String)trackers.get(name);
int e = baseURL.indexOf('=');
if (e < 0)
continue;
baseURL = baseURL.substring(e + 1);
if (count++ % 2 == 0)
out.write("<tr>");
out.write("<td><a href=\"" + baseURL + "\" class=\"snarkRefresh\">" + name + "</a>\n");
}
if (count % 2 == 1)
out.write("<td>&nbsp;\n");
out.write("</table>\n");
out.write("</td><td width=\"80%\" class=\"snarkMessages\" valign=\"top\" align=\"left\"><pre>");
List msgs = _manager.getMessages();
for (int i = msgs.size()-1; i >= 0; i--) {
String msg = (String)msgs.get(i);
out.write(msg + "\n");
}
out.write("</pre></td></tr></table>\n");
List snarks = getSortedSnarks(req);
String uri = req.getRequestURI();
out.write(TABLE_HEADER);
if (I2PSnarkUtil.instance().connected() && snarks.size() > 0) {
if (peerParam != null)
out.write("(<a href=\"" + req.getRequestURI() + "\">Hide Peers</a>)<br />\n");
else
out.write("(<a href=\"" + req.getRequestURI() + "?p=1" + "\">Show Peers</a>)<br />\n");
}
out.write(TABLE_HEADER2);
out.write("<th align=\"left\" valign=\"top\">");
if (I2PSnarkUtil.instance().connected())
out.write("<a href=\"" + uri + "?action=StopAll&nonce=" + _nonce +
"\" title=\"Stop all torrents and the i2p tunnel\">Stop All</a>");
else if (snarks.size() > 0)
out.write("<a href=\"" + uri + "?action=StartAll&nonce=" + _nonce +
"\" title=\"Start all torrents and the i2p tunnel\">Start All</a>");
else
out.write("&nbsp;");
out.write("</th></tr></thead>\n");
for (int i = 0; i < snarks.size(); i++) {
Snark snark = (Snark)snarks.get(i);
boolean showDebug = "2".equals(peerParam);
boolean showPeers = showDebug || "1".equals(peerParam) || Base64.encode(snark.meta.getInfoHash()).equals(peerParam);
displaySnark(out, snark, uri, i, stats, showPeers, showDebug);
}
if (snarks.size() <= 0) {
out.write(TABLE_EMPTY);
} else if (snarks.size() > 1) {
out.write(TABLE_TOTAL);
out.write(" <th align=\"right\" valign=\"top\">" + formatSize(stats[0]) + "</th>\n" +
" <th align=\"right\" valign=\"top\">" + formatSize(stats[1]) + "</th>\n" +
" <th align=\"right\" valign=\"top\">" + formatSize(stats[2]) + "ps</th>\n" +
" <th align=\"right\" valign=\"top\">" + formatSize(stats[3]) + "ps</th>\n" +
" <th>&nbsp;</th></tr>\n" +
"</tfoot>\n");
}
out.write(TABLE_FOOTER);
writeAddForm(out, req);
if (true) // seeding needs to register the torrent first, so we can't start it automatically (boo, hiss)
writeSeedForm(out, req);
writeConfigForm(out, req);
out.write(FOOTER);
}
/**
* Do what they ask, adding messages to _manager.addMessage as necessary
*/
private void processRequest(HttpServletRequest req) {
String action = req.getParameter("action");
if (action == null) {
// noop
} else if ("Add torrent".equals(action)) {
String newFile = req.getParameter("newFile");
String newURL = req.getParameter("newURL");
File f = null;
if ( (newFile != null) && (newFile.trim().length() > 0) )
f = new File(newFile.trim());
if ( (f != null) && (!f.exists()) ) {
_manager.addMessage("Torrent file " + newFile +" does not exist");
}
if ( (f != null) && (f.exists()) ) {
File local = new File(_manager.getDataDir(), f.getName());
String canonical = null;
try {
canonical = local.getCanonicalPath();
if (local.exists()) {
if (_manager.getTorrent(canonical) != null)
_manager.addMessage("Torrent already running: " + newFile);
else
_manager.addMessage("Torrent already in the queue: " + newFile);
} else {
boolean ok = FileUtil.copy(f.getAbsolutePath(), local.getAbsolutePath(), true);
if (ok) {
_manager.addMessage("Copying torrent to " + local.getAbsolutePath());
_manager.addTorrent(canonical);
} else {
_manager.addMessage("Unable to copy the torrent to " + local.getAbsolutePath() + " from " + f.getAbsolutePath());
}
}
} catch (IOException ioe) {
_log.warn("hrm: " + local, ioe);
}
} else if ( (newURL != null) && (newURL.trim().length() > "http://.i2p/".length()) ) {
_manager.addMessage("Fetching " + newURL);
I2PThread fetch = new I2PThread(new FetchAndAdd(_manager, newURL), "Fetch and add");
fetch.start();
} else {
// no file or URL specified
}
} else if ("Stop".equals(action)) {
String torrent = req.getParameter("torrent");
if (torrent != null) {
byte infoHash[] = Base64.decode(torrent);
if ( (infoHash != null) && (infoHash.length == 20) ) { // valid sha1
for (Iterator iter = _manager.listTorrentFiles().iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
Snark snark = _manager.getTorrent(name);
if ( (snark != null) && (DataHelper.eq(infoHash, snark.meta.getInfoHash())) ) {
_manager.stopTorrent(name, false);
break;
}
}
}
}
} else if ("Start".equals(action)) {
String torrent = req.getParameter("torrent");
if (torrent != null) {
byte infoHash[] = Base64.decode(torrent);
if ( (infoHash != null) && (infoHash.length == 20) ) { // valid sha1
for (Iterator iter = _manager.listTorrentFiles().iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
Snark snark = _manager.getTorrent(name);
if ( (snark != null) && (DataHelper.eq(infoHash, snark.meta.getInfoHash())) ) {
snark.startTorrent();
_manager.addMessage("Starting up torrent " + name);
break;
}
}
}
}
} else if ("Remove".equals(action)) {
String torrent = req.getParameter("torrent");
if (torrent != null) {
byte infoHash[] = Base64.decode(torrent);
if ( (infoHash != null) && (infoHash.length == 20) ) { // valid sha1
for (Iterator iter = _manager.listTorrentFiles().iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
Snark snark = _manager.getTorrent(name);
if ( (snark != null) && (DataHelper.eq(infoHash, snark.meta.getInfoHash())) ) {
_manager.stopTorrent(name, true);
// should we delete the torrent file?
// yeah, need to, otherwise it'll get autoadded again (at the moment)
File f = new File(name);
f.delete();
_manager.addMessage("Torrent file deleted: " + f.getAbsolutePath());
break;
}
}
}
}
} else if ("Delete".equals(action)) {
String torrent = req.getParameter("torrent");
if (torrent != null) {
byte infoHash[] = Base64.decode(torrent);
if ( (infoHash != null) && (infoHash.length == 20) ) { // valid sha1
for (Iterator iter = _manager.listTorrentFiles().iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
Snark snark = _manager.getTorrent(name);
if ( (snark != null) && (DataHelper.eq(infoHash, snark.meta.getInfoHash())) ) {
_manager.stopTorrent(name, true);
File f = new File(name);
f.delete();
_manager.addMessage("Torrent file deleted: " + f.getAbsolutePath());
List files = snark.meta.getFiles();
String dataFile = snark.meta.getName();
f = new File(_manager.getDataDir(), dataFile);
if (files == null) { // single file torrent
if (f.delete())
_manager.addMessage("Data file deleted: " + f.getAbsolutePath());
else
_manager.addMessage("Data file could not be deleted: " + f.getAbsolutePath());
break;
}
for (int i = 0; i < files.size(); i++) { // pass 1 delete files
// multifile torrents have the getFiles() return lists of lists of filenames, but
// each of those lists just contains a single file, as far as I can tell...
File df = Storage.getFileFromNames(f, (List) files.get(i));
if (df.delete())
_manager.addMessage("Data file deleted: " + df.getAbsolutePath());
else
_manager.addMessage("Data file could not be deleted: " + df.getAbsolutePath());
}
for (int i = files.size() - 1; i >= 0; i--) { // pass 2 delete dirs - not foolproof,
// we could sort and do a strict bottom-up
File df = Storage.getFileFromNames(f, (List) files.get(i));
df = df.getParentFile();
if (df == null || !df.exists())
continue;
if(df.delete())
_manager.addMessage("Data dir deleted: " + df.getAbsolutePath());
}
break;
}
}
}
}
} else if ("Save configuration".equals(action)) {
String dataDir = req.getParameter("dataDir");
boolean autoStart = req.getParameter("autoStart") != null;
String seedPct = req.getParameter("seedPct");
String eepHost = req.getParameter("eepHost");
String eepPort = req.getParameter("eepPort");
String i2cpHost = req.getParameter("i2cpHost");
String i2cpPort = req.getParameter("i2cpPort");
String i2cpOpts = req.getParameter("i2cpOpts");
String upLimit = req.getParameter("upLimit");
String upBW = req.getParameter("upBW");
boolean useOpenTrackers = req.getParameter("useOpenTrackers") != null;
String openTrackers = req.getParameter("openTrackers");
_manager.updateConfig(dataDir, autoStart, seedPct, eepHost, eepPort, i2cpHost, i2cpPort, i2cpOpts, upLimit, upBW, useOpenTrackers, openTrackers);
} else if ("Create torrent".equals(action)) {
String baseData = req.getParameter("baseFile");
if (baseData != null) {
File baseFile = new File(_manager.getDataDir(), baseData);
String announceURL = req.getParameter("announceURL");
String announceURLOther = req.getParameter("announceURLOther");
if ( (announceURLOther != null) && (announceURLOther.trim().length() > "http://.i2p/announce".length()) )
announceURL = announceURLOther;
if (announceURL == null || announceURL.length() <= 0)
_manager.addMessage("Error creating torrent - you must select a tracker");
else if (baseFile.exists()) {
try {
Storage s = new Storage(baseFile, announceURL, null);
s.create();
s.close(); // close the files... maybe need a way to pass this Storage to addTorrent rather than starting over
MetaInfo info = s.getMetaInfo();
File torrentFile = new File(baseFile.getParent(), baseFile.getName() + ".torrent");
if (torrentFile.exists())
throw new IOException("Cannot overwrite an existing .torrent file: " + torrentFile.getPath());
_manager.saveTorrentStatus(info, s.getBitField()); // so addTorrent won't recheck
// DirMonitor could grab this first, maybe hold _snarks lock?
FileOutputStream out = new FileOutputStream(torrentFile);
out.write(info.getTorrentData());
out.close();
_manager.addMessage("Torrent created for " + baseFile.getName() + ": " + torrentFile.getAbsolutePath());
// now fire it up, but don't automatically seed it
_manager.addTorrent(torrentFile.getCanonicalPath(), true);
_manager.addMessage("Many I2P trackers require you to register new torrents before seeding - please do so before starting " + baseFile.getName());
} catch (IOException ioe) {
_manager.addMessage("Error creating a torrent for " + baseFile.getAbsolutePath() + ": " + ioe.getMessage());
}
} else {
_manager.addMessage("Cannot create a torrent for the nonexistent data: " + baseFile.getAbsolutePath());
}
}
} else if ("StopAll".equals(action)) {
_manager.addMessage("Stopping all torrents and closing the I2P tunnel");
List snarks = getSortedSnarks(req);
for (int i = 0; i < snarks.size(); i++) {
Snark snark = (Snark)snarks.get(i);
if (!snark.stopped)
_manager.stopTorrent(snark.torrent, false);
}
if (I2PSnarkUtil.instance().connected()) {
I2PSnarkUtil.instance().disconnect();
_manager.addMessage("I2P tunnel closed");
}
} else if ("StartAll".equals(action)) {
_manager.addMessage("Opening the I2P tunnel and starting all torrents");
List snarks = getSortedSnarks(req);
for (int i = 0; i < snarks.size(); i++) {
Snark snark = (Snark)snarks.get(i);
if (snark.stopped)
snark.startTorrent();
}
}
}
private List getSortedSnarks(HttpServletRequest req) {
Set files = _manager.listTorrentFiles();
TreeSet fileNames = new TreeSet(files); // sorts it alphabetically
ArrayList rv = new ArrayList(fileNames.size());
for (Iterator iter = fileNames.iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
Snark snark = _manager.getTorrent(name);
if (snark != null)
rv.add(snark);
}
return rv;
}
private static final int MAX_DISPLAYED_FILENAME_LENGTH = 60;
private static final int MAX_DISPLAYED_ERROR_LENGTH = 40;
private void displaySnark(PrintWriter out, Snark snark, String uri, int row, long stats[], boolean showPeers, boolean showDebug) throws IOException {
String filename = snark.torrent;
File f = new File(filename);
filename = f.getName(); // the torrent may be the canonical name, so let's just grab the local name
int i = filename.lastIndexOf(".torrent");
if (i > 0)
filename = filename.substring(0, i);
if (filename.length() > MAX_DISPLAYED_FILENAME_LENGTH)
filename = filename.substring(0, MAX_DISPLAYED_FILENAME_LENGTH) + "...";
long total = snark.meta.getTotalLength();
// Early typecast, avoid possibly overflowing a temp integer
long remaining = (long) snark.storage.needed() * (long) snark.meta.getPieceLength(0);
if (remaining > total)
remaining = total;
long downBps = 0;
long upBps = 0;
if (snark.coordinator != null) {
downBps = snark.coordinator.getDownloadRate();
upBps = snark.coordinator.getUploadRate();
}
long remainingSeconds;
if (downBps > 0)
remainingSeconds = remaining / downBps;
else
remainingSeconds = -1;
boolean isRunning = !snark.stopped;
long uploaded = 0;
if (snark.coordinator != null) {
uploaded = snark.coordinator.getUploaded();
stats[0] += snark.coordinator.getDownloaded();
}
stats[1] += uploaded;
if (isRunning) {
stats[2] += downBps;
stats[3] += upBps;
}
boolean isValid = snark.meta != null;
boolean singleFile = snark.meta.getFiles() == null;
String err = null;
int curPeers = 0;
int knownPeers = 0;
if (snark.coordinator != null) {
err = snark.coordinator.trackerProblems;
curPeers = snark.coordinator.getPeerCount();
knownPeers = snark.coordinator.trackerSeenPeers;
}
String statusString = "Unknown";
if (err != null) {
if (isRunning && curPeers > 0 && !showPeers)
statusString = "<a title=\"" + err + "\">TrackerErr</a> (" +
curPeers + "/" + knownPeers +
" <a href=\"" + uri + "?p=" + Base64.encode(snark.meta.getInfoHash()) + "\">peers</a>)";
else if (isRunning)
statusString = "<a title=\"" + err + "\">TrackerErr (" + curPeers + "/" + knownPeers + " peers)";
else {
if (err.length() > MAX_DISPLAYED_ERROR_LENGTH)
err = err.substring(0, MAX_DISPLAYED_ERROR_LENGTH) + "...";
statusString = "TrackerErr<br />(" + err + ")";
}
} else if (remaining <= 0) {
if (isRunning && curPeers > 0 && !showPeers)
statusString = "Seeding (" +
curPeers + "/" + knownPeers +
" <a href=\"" + uri + "?p=" + Base64.encode(snark.meta.getInfoHash()) + "\">peers</a>)";
else if (isRunning)
statusString = "Seeding (" + curPeers + "/" + knownPeers + " peers)";
else
statusString = "Complete";
} else {
if (isRunning && curPeers > 0 && downBps > 0 && !showPeers)
statusString = "OK (" +
curPeers + "/" + knownPeers +
" <a href=\"" + uri + "?p=" + Base64.encode(snark.meta.getInfoHash()) + "\">peers</a>)";
else if (isRunning && curPeers > 0 && downBps > 0)
statusString = "OK (" + curPeers + "/" + knownPeers + " peers)";
else if (isRunning && curPeers > 0 && !showPeers)
statusString = "Stalled (" +
curPeers + "/" + knownPeers +
" <a href=\"" + uri + "?p=" + Base64.encode(snark.meta.getInfoHash()) + "\">peers</a>)";
else if (isRunning && curPeers > 0)
statusString = "Stalled (" + curPeers + "/" + knownPeers + " peers)";
else if (isRunning)
statusString = "No Peers (0/" + knownPeers + ")";
else
statusString = "Stopped";
}
String rowClass = (row % 2 == 0 ? "snarkTorrentEven" : "snarkTorrentOdd");
out.write("<tr class=\"" + rowClass + "\">");
out.write("<td valign=\"top\" align=\"left\" class=\"snarkTorrentStatus " + rowClass + "\">");
out.write(statusString + "</td>\n\t");
out.write("<td valign=\"top\" align=\"left\" class=\"snarkTorrentName " + rowClass + "\">");
if (remaining == 0)
out.write("<a href=\"file:///" + _manager.getDataDir().getAbsolutePath() + File.separatorChar + snark.meta.getName()
+ "\" title=\"Download the completed file\">");
out.write(filename);
if (remaining == 0)
out.write("</a>");
// temporarily hardcoded for postman and anonymity, requires bytemonsoon patch for lookup by info_hash
String announce = snark.meta.getAnnounce();
if (announce.startsWith("http://YRgrgTLG") || announce.startsWith("http://8EoJZIKr")) {
Map trackers = _manager.getTrackers();
for (Iterator iter = trackers.keySet().iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
String baseURL = (String)trackers.get(name);
if (!baseURL.startsWith(announce))
continue;
int e = baseURL.indexOf('=');
if (e < 0)
continue;
baseURL = baseURL.substring(e + 1);
out.write("&nbsp;&nbsp;&nbsp;(<a href=\"" + baseURL + "details.php?dllist=1&filelist=1&info_hash=");
out.write(TrackerClient.urlencode(snark.meta.getInfoHash()));
out.write("\" title=\"" + name + " Tracker\">Details</a>)");
break;
}
}
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentETA " + rowClass + "\">");
if(isRunning && remainingSeconds > 0)
out.write(DataHelper.formatDuration(remainingSeconds*1000)); // (eta 6h)
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentDownloaded " + rowClass + "\">");
if (remaining > 0)
out.write(formatSize(total-remaining) + "/" + formatSize(total)); // 18MB/3GB
else
out.write(formatSize(total)); // 3GB
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentUploaded " + rowClass
+ "\">" + formatSize(uploaded) + "</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentRate\">");
if(isRunning && remaining > 0)
out.write(formatSize(downBps) + "ps");
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentRate\">");
if(isRunning)
out.write(formatSize(upBps) + "ps");
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"left\" class=\"snarkTorrentAction " + rowClass + "\">");
String parameters = "&nonce=" + _nonce + "&torrent=" + Base64.encode(snark.meta.getInfoHash());
if (showPeers)
parameters = parameters + "&p=1";
if (isRunning) {
out.write("<a href=\"" + uri + "?action=Stop" + parameters
+ "\" title=\"Stop the torrent\">Stop</a>");
} else {
if (isValid)
out.write("<a href=\"" + uri + "?action=Start" + parameters
+ "\" title=\"Start the torrent\">Start</a> ");
out.write("<a href=\"" + uri + "?action=Remove" + parameters
+ "\" title=\"Remove the torrent from the active list, deleting the .torrent file\">Remove</a><br />");
out.write("<a href=\"" + uri + "?action=Delete" + parameters
+ "\" title=\"Delete the .torrent file and the associated data file(s)\">Delete</a> ");
}
out.write("</td>\n</tr>\n");
if(showPeers && isRunning && curPeers > 0) {
List peers = snark.coordinator.peerList();
Iterator it = peers.iterator();
while (it.hasNext()) {
Peer peer = (Peer)it.next();
if (!peer.isConnected())
continue;
out.write("<tr class=\"" + rowClass + "\">");
out.write("<td class=\"snarkTorrentStatus " + rowClass + "\">");
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentStatus " + rowClass + "\">");
String ch = peer.toString().substring(0, 4);
String client;
if ("AwMD".equals(ch))
client = "I2PSnark";
else if ("BFJT".equals(ch))
client = "I2PRufus";
else if ("TTMt".equals(ch))
client = "I2P-BT";
else if ("LUFa".equals(ch))
client = "Azureus";
else
client = "Unknown";
out.write("<font size=-1>" + client + "</font>&nbsp;&nbsp;<tt>" + peer.toString().substring(5, 9) + "</tt>");
if (showDebug)
out.write(" inactive " + (peer.getInactiveTime() / 1000) + "s");
out.write("</td>\n\t");
out.write("<td class=\"snarkTorrentStatus " + rowClass + "\">");
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentStatus " + rowClass + "\">");
float pct = (float) (100.0 * (float) peer.completed() / snark.meta.getPieces());
if (pct == 100.0)
out.write("<font size=-1>Seed</font>");
else {
String ps = String.valueOf(pct);
if (ps.length() > 5)
ps = ps.substring(0, 5);
out.write("<font size=-1>" + ps + "%</font>");
}
out.write("</td>\n\t");
out.write("<td class=\"snarkTorrentStatus " + rowClass + "\">");
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentStatus " + rowClass + "\">");
if (remaining > 0) {
if (peer.isInteresting() && !peer.isChoked()) {
out.write("<font color=#008000>");
out.write("<font size=-1>" + formatSize(peer.getDownloadRate()) + "ps</font></font>");
} else {
out.write("<font color=#a00000><font size=-1><a title=\"");
if (!peer.isInteresting())
out.write("Uninteresting\">");
else
out.write("Choked\">");
out.write(formatSize(peer.getDownloadRate()) + "ps</a></font></font>");
}
}
out.write("</td>\n\t");
out.write("<td valign=\"top\" align=\"right\" class=\"snarkTorrentStatus " + rowClass + "\">");
if (pct != 100.0) {
if (peer.isInterested() && !peer.isChoking()) {
out.write("<font color=#008000>");
out.write("<font size=-1>" + formatSize(peer.getUploadRate()) + "ps</font></font>");
} else {
out.write("<font color=#a00000><font size=-1><a title=\"");
if (!peer.isInterested())
out.write("Uninterested\">");
else
out.write("Choking\">");
out.write(formatSize(peer.getUploadRate()) + "ps</a></font></font>");
}
}
out.write("</td>\n\t");
out.write("<td class=\"snarkTorrentStatus " + rowClass + "\">");
out.write("</td></tr>\n\t");
if (showDebug)
out.write("<tr><td colspan=\"8\" align=\"right\" class=\"snarkTorrentStatus " + rowClass + "\">" + peer.getSocket() + "</td></tr>");
}
}
}
private void writeAddForm(PrintWriter out, HttpServletRequest req) throws IOException {
String uri = req.getRequestURI();
String newURL = req.getParameter("newURL");
if ( (newURL == null) || (newURL.trim().length() <= 0) ) newURL = "http://";
String newFile = req.getParameter("newFile");
if ( (newFile == null) || (newFile.trim().length() <= 0) ) newFile = "";
out.write("<span class=\"snarkNewTorrent\">\n");
// *not* enctype="multipart/form-data", so that the input type=file sends the filename, not the file
out.write("<form action=\"" + uri + "\" method=\"POST\">\n");
out.write("<input type=\"hidden\" name=\"nonce\" value=\"" + _nonce + "\" />\n");
out.write("<span class=\"snarkConfigTitle\">Add Torrent:</span><br />\n");
out.write("From URL&nbsp;: <input type=\"text\" name=\"newURL\" size=\"80\" value=\"" + newURL + "\" /> \n");
// not supporting from file at the moment, since the file name passed isn't always absolute (so it may not resolve)
//out.write("From file: <input type=\"file\" name=\"newFile\" size=\"50\" value=\"" + newFile + "\" /><br />\n");
out.write("<input type=\"submit\" value=\"Add torrent\" name=\"action\" /><br />\n");
out.write("<span class=\"snarkAddInfo\">Alternately, you can copy .torrent files to " + _manager.getDataDir().getAbsolutePath() + "<br />\n");
out.write("Removing that .torrent file will cause the torrent to stop.<br /></span>\n");
out.write("</form>\n</span>\n");
}
private void writeSeedForm(PrintWriter out, HttpServletRequest req) throws IOException {
String uri = req.getRequestURI();
String baseFile = req.getParameter("baseFile");
if (baseFile == null)
baseFile = "";
out.write("<span class=\"snarkNewTorrent\"><hr />\n");
// *not* enctype="multipart/form-data", so that the input type=file sends the filename, not the file
out.write("<form action=\"" + uri + "\" method=\"POST\">\n");
out.write("<input type=\"hidden\" name=\"nonce\" value=\"" + _nonce + "\" />\n");
out.write("<span class=\"snarkConfigTitle\">Create Torrent:</span><br />\n");
//out.write("From file: <input type=\"file\" name=\"newFile\" size=\"50\" value=\"" + newFile + "\" /><br />\n");
out.write("Data to seed: " + _manager.getDataDir().getAbsolutePath() + File.separatorChar
+ "<input type=\"text\" name=\"baseFile\" size=\"20\" value=\"" + baseFile
+ "\" title=\"File to seed (must be within the specified path)\" /><br />\n");
out.write("Tracker: <select name=\"announceURL\"><option value=\"\">Select a tracker</option>\n");
Map trackers = _manager.getTrackers();
for (Iterator iter = trackers.keySet().iterator(); iter.hasNext(); ) {
String name = (String)iter.next();
String announceURL = (String)trackers.get(name);
int e = announceURL.indexOf('=');
if (e > 0)
announceURL = announceURL.substring(0, e);
out.write("\t<option value=\"" + announceURL + "\">" + name + "</option>\n");
}
out.write("</select>\n");
out.write("or <input type=\"text\" name=\"announceURLOther\" size=\"50\" value=\"http://\" " +
"title=\"Custom tracker URL\" /> ");
out.write("<input type=\"submit\" value=\"Create torrent\" name=\"action\" />\n");
out.write("</form>\n</span>\n");
}
private void writeConfigForm(PrintWriter out, HttpServletRequest req) throws IOException {
String uri = req.getRequestURI();
String dataDir = _manager.getDataDir().getAbsolutePath();
boolean autoStart = _manager.shouldAutoStart();
boolean useOpenTrackers = _manager.shouldUseOpenTrackers();
String openTrackers = _manager.getOpenTrackerString();
//int seedPct = 0;
out.write("<form action=\"" + uri + "\" method=\"POST\">\n");
out.write("<span class=\"snarkConfig\"><hr />\n");
out.write("<input type=\"hidden\" name=\"nonce\" value=\"" + _nonce + "\" />\n");
out.write("<span class=\"snarkConfigTitle\">Configuration:</span><br />\n");
out.write("Data directory: <input type=\"text\" size=\"40\" name=\"dataDir\" value=\"" + dataDir + "\" ");
out.write("title=\"Directory to store torrents and data\" disabled=\"true\" /> <i>(Edit i2psnark.config and restart to change)</i><br />\n");
out.write("Auto start: <input type=\"checkbox\" name=\"autoStart\" value=\"true\" "
+ (autoStart ? "checked " : "")
+ "title=\"If true, automatically start torrents that are added\" />");
//Auto add: <input type="checkbox" name="autoAdd" value="true" title="If true, automatically add torrents that are found in the data directory" />
//Auto stop: <input type="checkbox" name="autoStop" value="true" title="If true, automatically stop torrents that are removed from the data directory" />
//out.write("<br />\n");
/*
out.write("Seed percentage: <select name=\"seedPct\" disabled=\"true\" >\n\t");
if (seedPct <= 0)
out.write("<option value=\"0\" selected=\"true\">Unlimited</option>\n\t");
else
out.write("<option value=\"0\">Unlimited</option>\n\t");
if (seedPct == 100)
out.write("<option value=\"100\" selected=\"true\">100%</option>\n\t");
else
out.write("<option value=\"100\">100%</option>\n\t");
if (seedPct == 150)
out.write("<option value=\"150\" selected=\"true\">150%</option>\n\t");
else
out.write("<option value=\"150\">150%</option>\n\t");
out.write("</select><br />\n");
*/
out.write("Total uploader limit: <input type=\"text\" name=\"upLimit\" value=\""
+ I2PSnarkUtil.instance().getMaxUploaders() + "\" size=\"3\" maxlength=\"3\" /> peers<br />\n");
out.write("Up bandwidth limit: <input type=\"text\" name=\"upBW\" value=\""
+ I2PSnarkUtil.instance().getMaxUpBW() + "\" size=\"3\" maxlength=\"3\" /> KBps <i>(Router Up BW / 2 recommended)</i><br />\n");
out.write("Use open trackers also: <input type=\"checkbox\" name=\"useOpenTrackers\" value=\"true\" "
+ (useOpenTrackers ? "checked " : "")
+ "title=\"If true, uses open trackers in addition\" /> ");
out.write("Announce URLs: <input type=\"text\" name=\"openTrackers\" value=\""
+ openTrackers + "\" size=\"50\" /><br />\n");
//out.write("<hr />\n");
out.write("EepProxy host: <input type=\"text\" name=\"eepHost\" value=\""
+ I2PSnarkUtil.instance().getEepProxyHost() + "\" size=\"15\" /> ");
out.write("port: <input type=\"text\" name=\"eepPort\" value=\""
+ I2PSnarkUtil.instance().getEepProxyPort() + "\" size=\"5\" maxlength=\"5\" /><br />\n");
out.write("I2CP host: <input type=\"text\" name=\"i2cpHost\" value=\""
+ I2PSnarkUtil.instance().getI2CPHost() + "\" size=\"15\" /> ");
out.write("port: <input type=\"text\" name=\"i2cpPort\" value=\"" +
I2PSnarkUtil.instance().getI2CPPort() + "\" size=\"5\" maxlength=\"5\" /> <br />\n");
StringBuffer opts = new StringBuffer(64);
Map options = new TreeMap(I2PSnarkUtil.instance().getI2CPOptions());
for (Iterator iter = options.keySet().iterator(); iter.hasNext(); ) {
String key = (String)iter.next();
String val = (String)options.get(key);
opts.append(key).append('=').append(val).append(' ');
}
out.write("I2CP opts: <input type=\"text\" name=\"i2cpOpts\" size=\"80\" value=\""
+ opts.toString() + "\" /><br />\n");
out.write("<input type=\"submit\" value=\"Save configuration\" name=\"action\" />\n");
out.write("</span>\n");
out.write("</form>\n");
}
// rounding makes us look faster :)
private String formatSize(long bytes) {
if (bytes < 5*1024)
return bytes + "B";
else if (bytes < 5*1024*1024)
return ((bytes + 512)/1024) + "KB";
else if (bytes < 5*1024*1024*1024l)
return ((bytes + 512*1024)/(1024*1024)) + "MB";
else
return ((bytes + 512*1024*1024)/(1024*1024*1024)) + "GB";
}
private static final String HEADER_BEGIN = "<html>\n" +
"<head>\n" +
"<title>I2PSnark - anonymous bittorrent</title>\n";
private static final String HEADER = "<style>\n" +
"body {\n" +
" background-color: #C7CFB4;\n" +
"}\n" +
".snarkTitle {\n" +
" text-align: left;\n" +
" float: left;\n" +
" margin: 0px 0px 5px 5px;\n" +
" display: inline;\n" +
" font-size: 16pt;\n" +
" font-weight: bold;\n" +
"}\n" +
".snarkRefresh {\n" +
" font-size: 10pt;\n" +
"}\n" +
".snarkMessages {\n" +
" border: none;\n" +
" background-color: #CECFC6;\n" +
" font-family: monospace;\n" +
" font-size: 10pt;\n" +
" font-weight: 100;\n" +
" width: 100%;\n" +
" text-align: left;\n" +
" margin: 0px 0px 0px 0px;\n" +
" border: 0px;\n" +
" padding: 5px;\n" +
" border-width: 0px;\n" +
" border-spacing: 0px;\n" +
"}\n" +
"table {\n" +
" margin: 0px 0px 0px 0px;\n" +
" border: 0px;\n" +
" padding: 0px;\n" +
" border-width: 0px;\n" +
" border-spacing: 0px;\n" +
"}\n" +
"th {\n" +
" background-color: #C7D5D5;\n" +
" padding: 0px 7px 0px 3px;\n" +
"}\n" +
"td {\n" +
" padding: 0px 7px 0px 3px;\n" +
"}\n" +
".snarkTorrentEven {\n" +
" background-color: #E7E7E7;\n" +
"}\n" +
".snarkTorrentOdd {\n" +
" background-color: #DDDDCC;\n" +
"}\n" +
".snarkNewTorrent {\n" +
" font-size: 10pt;\n" +
"}\n" +
".snarkAddInfo {\n" +
" font-size: 10pt;\n" +
"}\n" +
".snarkConfigTitle {\n" +
" font-size: 12pt;\n" +
" font-weight: bold;\n" +
"}\n" +
".snarkConfig {\n" +
" font-size: 10pt;\n" +
"}\n" +
"</style>\n" +
"</head>\n" +
"<body>\n";
private static final String TABLE_HEADER = "<table border=\"0\" class=\"snarkTorrents\" width=\"100%\" cellpadding=\"0 10px\">\n" +
"<thead>\n" +
"<tr><th align=\"left\" valign=\"top\">Status \n";
private static final String TABLE_HEADER2 = "</th>\n" +
" <th align=\"left\" valign=\"top\">Torrent</th>\n" +
" <th align=\"right\" valign=\"top\">ETA</th>\n" +
" <th align=\"right\" valign=\"top\">Downloaded</th>\n" +
" <th align=\"right\" valign=\"top\">Uploaded</th>\n" +
" <th align=\"right\" valign=\"top\">Down Rate</th>\n" +
" <th align=\"right\" valign=\"top\">Up Rate</th>\n";
private static final String TABLE_TOTAL = "<tfoot>\n" +
"<tr><th align=\"left\" valign=\"top\">Totals</th>\n" +
" <th>&nbsp;</th>\n" +
" <th>&nbsp;</th>\n";
private static final String TABLE_EMPTY = "<tr class=\"snarkTorrentEven\">" +
"<td class=\"snarkTorrentEven\" align=\"left\"" +
" valign=\"top\" colspan=\"8\">No torrents</td></tr>\n";
private static final String TABLE_FOOTER = "</table>\n";
private static final String FOOTER = "</body></html>";
}
class FetchAndAdd implements Runnable {
private SnarkManager _manager;
private String _url;
public FetchAndAdd(SnarkManager mgr, String url) {
_manager = mgr;
_url = url;
}
public void run() {
_url = _url.trim();
// 3 retries
File file = I2PSnarkUtil.instance().get(_url, false, 3);
try {
if ( (file != null) && (file.exists()) && (file.length() > 0) ) {
_manager.addMessage("Torrent fetched from " + _url);
FileInputStream in = null;
try {
in = new FileInputStream(file);
MetaInfo info = new MetaInfo(in);
String name = info.getName();
name = name.replace('/', '_');
name = name.replace('\\', '_');
name = name.replace('&', '+');
name = name.replace('\'', '_');
name = name.replace('"', '_');
name = name.replace('`', '_');
name = name + ".torrent";
File torrentFile = new File(_manager.getDataDir(), name);
String canonical = torrentFile.getCanonicalPath();
if (torrentFile.exists()) {
if (_manager.getTorrent(canonical) != null)
_manager.addMessage("Torrent already running: " + name);
else
_manager.addMessage("Torrent already in the queue: " + name);
} else {
FileUtil.copy(file.getAbsolutePath(), canonical, true);
_manager.addTorrent(canonical);
}
} catch (IOException ioe) {
_manager.addMessage("Torrent at " + _url + " was not valid: " + ioe.getMessage());
} finally {
if (in != null) try { in.close(); } catch (IOException ioe) {}
}
} else {
_manager.addMessage("Torrent was not retrieved from " + _url);
}
} finally {
if (file != null) file.delete();
}
}
}

View File

@@ -0,0 +1,48 @@
package org.klomp.snark.web;

import java.io.File;
import net.i2p.util.FileUtil;
import org.mortbay.jetty.Server;

/** Run i2psnark standalone in its own Jetty instance, configured by jetty-i2psnark.xml. */
public class RunStandalone {
    private Server _server;

    static {
        // Jetty tweaks: suppress version info, use a non-validating XML parser
        System.setProperty("org.mortbay.http.Version.paranoid", "true");
        System.setProperty("org.mortbay.xml.XmlParser.NotValidating", "true");
    }

    private RunStandalone(String args[]) {}

    public static void main(String args[]) {
        RunStandalone runner = new RunStandalone(args);
        runner.start();
    }

    public void start() {
        // Start from a clean Jetty work directory so stale unpacked wars don't linger
        File workDir = new File("work");
        boolean workDirRemoved = FileUtil.rmdir(workDir, false);
        if (!workDirRemoved)
            System.err.println("ERROR: Unable to remove Jetty temporary work directory");
        boolean workDirCreated = workDir.mkdirs();
        if (!workDirCreated)
            System.err.println("ERROR: Unable to create Jetty temporary work directory");
        try {
            // All listener and webapp wiring lives in jetty-i2psnark.xml
            _server = new Server("jetty-i2psnark.xml");
            _server.start();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void stop() {
        try {
            _server.stop();
        } catch (InterruptedException ie) {
            ie.printStackTrace();
        }
    }
}
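RunStandalone exposes stop(), but nothing in main() ever calls it. A hedged sketch of an alternative main() that registers a JVM shutdown hook so Jetty is stopped cleanly on exit; this is illustrative only and not part of this commit.

// Illustrative variant of RunStandalone.main(); same start-up path as above,
// plus a shutdown hook so _server.stop() runs when the process exits.
public static void main(String args[]) {
    final RunStandalone runner = new RunStandalone(args);
    runner.start();
    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            runner.stop();   // lets Jetty close its listeners cleanly
        }
    });
}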


@ -0,0 +1,114 @@
<?xml version="1.0" encoding="ISO-8859-1" ?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure 1.2//EN" "http://jetty.mortbay.org/configure_1_2.dtd">
<!-- =============================================================== -->
<!-- Configure the Jetty Server -->
<!-- =============================================================== -->
<Configure class="org.mortbay.jetty.Server">
<!-- =============================================================== -->
<!-- Configure the Request Listeners -->
<!-- =============================================================== -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!-- Add and configure an HTTP listener on 127.0.0.1 port 8002 -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<Call name="addListener">
<Arg>
<New class="org.mortbay.http.SocketListener">
<Arg>
<New class="org.mortbay.util.InetAddrPort">
<Set name="host">127.0.0.1</Set>
<Set name="port">8002</Set>
</New>
</Arg>
<Set name="MinThreads">3</Set>
<Set name="MaxThreads">10</Set>
<Set name="MaxIdleTimeMs">30000</Set>
<Set name="LowResourcePersistTimeMs">1000</Set>
<Set name="ConfidentialPort">8443</Set>
<Set name="IntegralPort">8443</Set>
<Set name="PoolName">main</Set>
</New>
</Arg>
</Call>
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!-- Add an HTTPS SSL listener on port 8443 -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!-- UNCOMMENT TO ACTIVATE
<Call name="addListener">
<Arg>
<New class="org.mortbay.http.SunJsseListener">
<Set name="Port">8443</Set>
<Set name="PoolName">main</Set>
<Set name="Keystore"><SystemProperty name="jetty.home" default="."/>/etc/demokeystore</Set>
<Set name="Password">OBF:1vny1zlo1x8e1vnw1vn61x8g1zlu1vn4</Set>
<Set name="KeyPassword">OBF:1u2u1wml1z7s1z7a1wnl1u2g</Set>
<Set name="NonPersistentUserAgent">MSIE 5</Set>
</New>
</Arg>
</Call>
-->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!-- Add an AJP13 listener on port 8009 -->
<!-- This protocol can be used with mod_jk in Apache, IIS, etc. -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--
<Call name="addListener">
<Arg>
<New class="org.mortbay.http.ajp.AJP13Listener">
<Set name="PoolName">ajp</Set>
<Set name="Port">8009</Set>
<Set name="MinThreads">3</Set>
<Set name="MaxThreads">20</Set>
<Set name="MaxIdleTimeMs">0</Set>
<Set name="confidentialPort">443</Set>
</New>
</Arg>
</Call>
-->
<!-- =============================================================== -->
<!-- Configure the Contexts -->
<!-- =============================================================== -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!-- Add all web applications within the webapps directory. -->
<!-- + No virtual host specified -->
<!-- + Look in the webapps directory relative to jetty.home or . -->
<!-- + Use the default webdefault.xml in Jetty's install -->
<!-- + Unpack the war file -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<Set name="rootWebApp">i2psnark</Set>
<Call name="addWebApplication">
<Arg>/</Arg>
<Arg>webapps/i2psnark.war</Arg>
</Call>
<!-- =============================================================== -->
<!-- Configure the Request Log -->
<!-- =============================================================== -->
<Set name="RequestLog">
<New class="org.mortbay.http.NCSARequestLog">
<Arg>./logs/yyyy_mm_dd.i2psnark-request.log</Arg>
<Set name="retainDays">90</Set>
<Set name="append">true</Set>
<Set name="extended">false</Set>
<Set name="buffered">false</Set>
<Set name="LogTimeZone">GMT</Set>
</New>
</Set>
<!-- =============================================================== -->
<!-- Configure the Other Server Options -->
<!-- =============================================================== -->
<Set name="requestsPerGC">2000</Set>
<Set name="statsOn">false</Set>
</Configure>
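The XML above is purely declarative wiring; the launcher just passes the file name to new Server("jetty-i2psnark.xml") as shown in RunStandalone. As a sketch only, and using just the Jetty 5 classes and setters that this XML itself exercises, a rough programmatic equivalent of the essential parts would look like:

// Sketch only; mirrors the addListener / addWebApplication calls from jetty-i2psnark.xml
// in code. Not how i2psnark actually starts, and class/method names are taken directly
// from the XML above rather than verified against a Jetty javadoc.
import org.mortbay.http.SocketListener;
import org.mortbay.jetty.Server;
import org.mortbay.util.InetAddrPort;

public class SnarkJettySketch {
    public static void main(String args[]) throws Exception {
        Server server = new Server();

        // Bind an HTTP listener to 127.0.0.1:8002 with a small thread pool
        InetAddrPort addr = new InetAddrPort();
        addr.setHost("127.0.0.1");
        addr.setPort(8002);
        SocketListener listener = new SocketListener(addr);
        listener.setMinThreads(3);
        listener.setMaxThreads(10);
        listener.setMaxIdleTimeMs(30000);
        server.addListener(listener);

        // Mount the i2psnark webapp at the root context
        server.addWebApplication("/", "webapps/i2psnark.war");

        server.start();
    }
}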


@ -0,0 +1,6 @@
To run I2PSnark from the command line, run "java -jar lib/i2psnark.jar"; to run it
with the web UI, run "java -jar launch-i2psnark.jar" instead. I2PSnark is
GPL'ed software, based on Snark (http://www.klomp.org/) and adapted to run on top
of I2P (http://www.i2p.net/) within a webserver (such as the bundled Jetty from
http://jetty.mortbay.org/). For more information about I2PSnark, get in touch
with the folks at http://forum.i2p.net/

apps/i2psnark/readme.txt

@ -0,0 +1,11 @@
This is an I2P port of Snark [http://klomp.org/snark], a GPL'ed BitTorrent client.
The built-in tracker has been removed for simplicity.
Example usage:
java -jar lib/i2psnark.jar myFile.torrent
or, a more verbose setting:
java -jar lib/i2psnark.jar --eepproxy 127.0.0.1 4444 \
--i2cp 127.0.0.1 7654 "inbound.length=2 outbound.length=2" \
--debug 6 myFile.torrent


@ -0,0 +1,141 @@
The Hunting of the Snark Project - BitTorrent Application Suite
0.5 - The Beaver's Lesson (27 June 2003)
"It's a Snark!" was the sound that first came to their ears,
And seemed almost too good to be true.
Then followed a torrent of laughter and cheers:
Then the ominous words "It's a Boo-"
-- from The Hunting Of The Snark by Lewis Carroll
Snark is a client for downloading and sharing files distributed with
the BitTorrent protocol. It is mainly used for exploring the BitTorrent
protocol and experimenting with the GNU Compiler for Java (gcj).
But it can also be used as a regular BitTorrent client.
Snark can also act as a torrent creator and a micro HTTP server for delivering
metainfo.torrent files, and it has an integrated tracker for making sharing of
files as easy as possible.
When you give the --share option, Snark will automatically
create a .torrent file, start a very simple webserver to distribute
the metainfo.torrent file, and start a local tracker that other BitTorrent
clients can connect to.
Distribution
------------
Copyright (C) 2003 Mark J. Wielaard
Snark is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Requirements/Installation
-------------------------
The GNU Compiler for Java (gcj), version 3.3 or later.
(Earlier versions have a faulty SHA message digest implementation.)
On Debian GNU/Linux based distributions just install the gcj-3.3 package.
Edit the GCJ variable in the Makefile if your gcj binary is not gcj-3.3.
Typing 'make' will create the native snark binary and a snark.jar file
for use with traditional Java bytecode interpreters.
It is possible to compile the sources with other Java compilers
like jikes or kjc to produce the snark.jar file. Edit the JAVAC and
JAVAC_FLAGS variables at the top of the Makefile for this, and type
'make snark.jar' to create a jar file that can be used by traditional
Java bytecode interpreters like kaffe: 'kaffe -jar snark.jar'.
You will need at least version 1.1 of kaffe for all functionality to work
correctly ('--share' does not work with older versions).
When trying out the experimental Gnome frontend you also need the java-gnome
bindings. On Debian GNU/Linux systems install the package libgnome0-java.
You can try it out by typing 'make snark-gnome' and then running 'snark-gnome.sh'
just as you would the normal command line client.
Running
-------
To use the program start it with:
snark [--debug [level]] [--no-commands] [--port <port>]
[--share (<ip>|<host>)] (<url>|<file>|<dir>)
--debug Shows some extra info and stacktraces.
level How much debug detail to show
(defaults to 3, with --debug to 4, highest level is 6).
--no-commands Don't read interactive commands or show usage info.
--port The port to listen on for incoming connections
(if not given defaults to first free port between 6881-6889).
--share Start torrent tracker on <ip> address or <host> name.
<url> URL pointing to .torrent metainfo file to download/share.
<file> Either a local .torrent metainfo file to download
or (with --share) a file to share.
<dir> A directory with files to share (needs --share).
Since this is an early beta release there are probably still some bugs
in the program. To help find them, run the program with the --debug
option, which shows more information on what is going on. You can also give
the level of debug output you want. Zero will give (almost) no output at all.
Everything above debug level 4 is probably too much (only really useful to
see what goes on at the protocol/network level).
Examples
--------
- To simply start downloading/sharing a file.
Either download the .torrent file to disk and start snark with:
./snark somefile.torrent
Or give it the complete URL:
./snark http://somehost.example.com/cd-images/bbc-lnx.iso.torrent
- To start seeding/sharing a local file:
./snark --share my-host.example.com some-file
Snark will respond with:
Listening on port: 6881
Trying to create metainfo torrent for 'some-file'
Creating torrent piece hashes: ++++++++++
Torrent available on http://my-host.example.com:6881/metainfo.torrent
You can now point other people to the above URL so they can share
the file with their own BitTorrent client.
Commands
--------
While the program is running in text mode you can currently give the
following commands: 'info', 'list' and 'quit'.
Interactive commands are disabled when the '--no-commands' flag is given.
This is sometimes desirable for running snark in the background.
More information
----------------
- The Evolution of Cooperation - Robert Axelrod
ISBN 0-465-02121-2
- The BitTorrent protocol description:
<http://bitconjurer.org/BitTorrent/protocol.html>
- The GNU Compiler for Java (gcj):
<http://gcc.gnu.org/java/>
- java-gnome bindings : <http://java-gnome.sourceforge.net/>
- The Hunting of the Snark - Lewis Carroll
Comments welcome
- Mark Wielaard <mark@klomp.org>

apps/i2psnark/web.xml

@ -0,0 +1,25 @@
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app
PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN"
"http://java.sun.com/j2ee/dtds/web-app_2.2.dtd">
<web-app>
<servlet>
<servlet-name>org.klomp.snark.web.I2PSnarkServlet</servlet-name>
<servlet-class>org.klomp.snark.web.I2PSnarkServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
<!-- precompiled servlets -->
<servlet-mapping>
<servlet-name>org.klomp.snark.web.I2PSnarkServlet</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
<session-config>
<session-timeout>
30
</session-timeout>
</session-config>
</web-app>
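The web.xml above simply maps a single servlet onto the root url-pattern and loads it at startup. For readers unfamiliar with the servlet API, a generic skeleton of what such a mapping expects is sketched below; this is not the real I2PSnarkServlet (part of which appears earlier in this diff), just an illustrative placeholder.

// Generic skeleton only; not the actual I2PSnarkServlet. Shows the shape of a servlet
// that web.xml registers on "/" and loads on startup.
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ExampleRootServlet extends HttpServlet {
    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Every GET under the root context ends up here
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>Hello from the root context</body></html>");
    }
}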

Some files were not shown because too many files have changed in this diff.