ICANN-Accredited Registrars
ICANN currently accredits domain-name registrars for the following Top Level Domains:
The following companies have been accredited by ICANN to act as registrars in one or more TLDs:
Registrar contact information and descriptions are available at http://www.icann.org/registrars/accreditation-qualified-list.html.
10.12.07
ICANN currently accredits these domain-name registrars
8.12.07
Multiple Transfers Using libcurl multi Interface
7.12.07
Curiosity is bliss: "Take It With You" Wiki
January 17, 2006
"Take It With You" Wiki
Although this blog has been silent for a while, I haven't been idle. I was working on an AJAX-based web application with transparent support for disconnected operations.
TiwyWiki is a prototype wiki that runs both online and offline without any install (besides Flash Player).
Here's a demonstration scenario:
- load the demo (requires Flash 8) and browse a couple of pages,
- pull the network plug off your computer and put your browser in Offline mode,
- re-open the wiki using the same url,
- while offline, continue reading and editing the cached pages of the wiki, create new pages,
- go back online and sync your updates back to the server.
I've only tested TiwyWiki with IE and Firefox on Windows, and I've heard that it runs properly on the Mac (Safari, I think). Let me know if it runs for you on any other platform.
This is just the skeleton of a wiki, but it gives a feel for the possibilities of web applications that can deal gracefully with being intermittently disconnected. I'm especially interested in hearing back about whether this approach is valuable to you, compared to the traditional web and rich client models.
What other applications would you find most appealing, and why?
Here are the ones I brainstormed so far: a personal wiki, various other personal or group GTD tools (such as todo list or calendar), a community wiki, an email reader and/or composer, a blog editor, an RSS reader, an app for driving directions.
Some background:
Two problems ran in circles in my head while I was on vacation a couple of weeks ago: how to make cross-domain XMLHttp requests before cross-domain is actually supported by browsers, and how to allow web applications to run offline? I started by focusing on the first one, probably because I've been toying recently with cross-domain XMLHttp and client-side storage through Greasemonkey. Also, I was thinking that it would help for running local/offline copies of web apps.
The problem with using Greasemonkey to extend the browser is that it's not widely available and it doesn't offer good control over cross-domain requests. A Flash and Javascript combination, such as the Flash-based Canvas or AMASS storage, seemed like a better solution.
As I learnt more about Flash 8 and its security model, my original plan of running a local copy of a web page for offline use didn't seem convenient enough: you would have to explicitly save the app locally and synch it before going offline.
When I found out that Flash Player did cache Flash apps properly, the idea of running both the online and offline scenarios from the same app took the lead. This avoided the new security restrictions for local apps in Flash 8, the need to keep two local caches of the data (one for the online domain and one for the local copy), and any installation problem.
Instead you would be able to use the app locally as soon as you used it online. First, whatever content you had already accessed would be cached and persisted locally (in the Flash app/storage). You could use pre-fetching to ensure your local cache would have the data that you want.
Second, the Flash app would act as a buffer for disconnected operations, such as local updates while running offline.
Design philosophy:
One interesting thing to realize is how and why the pieces fit together. As a starting point, you should understand that the AJAX trend is not simply about rich UI and eye candy, but more generally about providing a more responsive experience by optimizing the bottleneck resource (the network): you cache the data that doesn't change (some HTML, Javascript or CSS), and transfer only the information that is dynamic.
Once you have a web application that is entirely cacheable, you can support offline operations. You just need to have all the dynamic data go through a smart proxy that can do disconnected reads and updates.
That's where Flash comes into play, as it offers large persistent local storage and easy interfacing with Javascript.
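To make the idea concrete, here is a rough sketch (not TiwyWiki's actual code) of what such a read-through proxy can look like in Javascript; the localStore object here is just a stand-in for the calls into the Flash-backed storage:

// Illustrative sketch only: try the network first, fall back to the local store.
// 'localStore' is a plain object standing in for the Flash storage bridge.
// (Old IE would need new ActiveXObject("Microsoft.XMLHTTP") instead of XMLHttpRequest.)
var localStore = {};

function fetchPage(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return;
    if (xhr.status === 200) {
      localStore[url] = xhr.responseText;      // remember it for offline reads
      callback(xhr.responseText, false);       // fresh copy from the network
    } else {
      callback(localStore[url] || null, true); // offline or error: serve the cached copy
    }
  };
  try {
    xhr.send(null);
  } catch (e) {
    // some browsers throw on send() when offline; fall back to the cache
    callback(localStore[url] || null, true);
  }
}

Updates made while offline would similarly be queued in the local store and replayed against the server once the connection comes back.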
I don't see Flash as the long term solution, but rather a temporary workaround that allows for some early experimentation. Instead of waiting for new browser infrastructures, I wanted to demonstrate that web apps with offline support and no install were already feasible, relying only on a new combination of existing techniques.
That's why I tried to keep the extensions to the browser as clean and simple as possible, minimizing the amount of Flash and relying more on the common skillset (HTML+Javascript). I think this will motivate other developers to try this approach.
In this case, Flash actually turned out to be rather unobtrusive.
First, if you don't have it installed, the web app will still work fine, except with no offline support or persistent data caching.
Second, Flash offers some benefits that I hadn't anticipated. For example, the storage is shared between IE and Firefox. This makes for a nicer experience than I would expect from any native browser API, such as IE's client storage API or the draft storage API from the WHAT working group.
For those who want to avoid Flash, other alternative storage techniques could possibly be used to achieve similar results, such as IE's storage API, a Java applet, an ActiveX object or some other kind of browser extension.
In the long run, I hope this proof of concept and the following uses of this technique will help identify the right set of APIs to implement natively in browsers.
Caching:
Caching is at the heart of this solution and needs to be configured properly. When the expiration header (Expires, using mod_expires in Apache or directly in IIS) is correctly set for all the static content, both Firefox and IE let you run the application offline without complaining. Overall, IE appears to be more sensitive to mis-configured caching headers; in that case, it would often display prompts asking to work offline or to return online to continue the current operation.
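For reference, a minimal mod_expires configuration along those lines might look like the following (the lifetimes are only an example, not the settings TiwyWiki actually uses):

ExpiresActive On
ExpiresByType text/html "access plus 1 month"
ExpiresByType text/css "access plus 1 month"
ExpiresByType application/x-javascript "access plus 1 month"
ExpiresByType application/x-shockwave-flash "access plus 1 month"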
Loading Flash when offline:
During a troubleshooting session, I noticed something unexpected. The common markup for including Flash objects in IE actually causes a request to Macromedia, which usually replies with a 302 (but no caching headers).
Besides my surprise at discovering that Macromedia's server is hit every time a Flash app is opened in IE, this meant that the Flash object wouldn't load offline. So TiwyWiki uses its own Flash loading technique (yet another one) to support running offline.
Busting the cache:
One downside of forcing the application to be cached is that if a new version of the application becomes available, the browser won't notice it until the current version expires from the cache.
I'm still looking for ideas on how to let the application deal with this update scenario, so that it could have some logic to check for updates and trick the browser into reloading its cache (a forced refresh). There may be solutions using the XMLHttp API with the right request headers, if the different browsers cache those responses properly.
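One possible direction, sketched below and entirely untested, is to have the app periodically ask the server for a version number while bypassing the cache, and force a reload when it changes; the /version.txt resource and CURRENT_VERSION constant are hypothetical:

// Untested sketch: check a hypothetical version resource, bypassing caches.
var CURRENT_VERSION = "1";

function checkForUpdate() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/version.txt", true);
  xhr.setRequestHeader("Cache-Control", "no-cache"); // ask caches not to serve a stale copy
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200 &&
        xhr.responseText !== CURRENT_VERSION) {
      window.location.reload(true); // 'true' asks for a reload from the server, not the cache
    }
  };
  xhr.send(null);
}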
As a last resort, one could imagine a new browser API that would allow invalidating the cache for a given domain and path.
Locking files in the cache:
The other problem with running the application out of the browser's cache is that the user could "uninstall" the application by accidentally clearing the cache, or the application could be erased from the cache to make room when the cache fills up.
I'm still looking for ideas on how to achieve proper locking of the files in the cache.
In IE, that should be possible using the "Offline Favorites" feature. Whenever you bookmark a page, IE gives you the option to "Make [the favorite] available offline". If you check that option, IE will use a crawler (MSIECrawler) to pre-fetch and cache the content for offline reading. You can hint the crawler using a CDF file, linked from a tag.
But I implemented and ran various experiments with "Offline Favorites", and couldn't get the files to be properly frozen in the cache (they would still get scavenged to make room).
Making a framework:
A wiki turned out to be a rather complex application in terms of synchronization and error handling. I originally wanted to write a generic framework for occasionally connected web applications to deal with these problems. But besides the reusable Flash component, most of the code so far is specific to the schema and synchronization model of the application. My work on a second application (an RSS reader) hasn't helped me bubble up the right abstractions yet.
Do you know any generic synchronization framework which could be ported or mimicked in Javascript? Something like TrimQuery would be great if it supported INSERT and UPDATE.
Also, are there any existing libraries that offer a rich logical view over a persistent store that only supports sets of name-value pairs?
Developing with Flash:
This was my first time working with Flash and overall I found it easier than expected. ActionScript is a sibling of Javascript (both follow the ECMAScript specification), which made it easy to pick up. I was happy to interact with the Flash authoring tools as little as possible and ended up building the Flash component entirely with the MTASC compiler. I haven't run into too many problems with the Flash APIs. ExternalInterface is quite convenient, although I've had to work around a performance issue when passing large data across.
I wouldn't expect too much performance from the storage API, SharedObject, which serializes objects into files. But this hasn't been a problem so far.
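For illustration, the Javascript side of such a bridge might look roughly like this; the getItem/setItem callbacks and the "tiwy" element id are hypothetical names, not TiwyWiki's actual API, and they assume the SWF registered them with ExternalInterface.addCallback:

// Hypothetical names throughout; assumes the Flash component exposed
// getItem/setItem via ExternalInterface.addCallback and persists values
// with SharedObject on the ActionScript side.
function flashStore() {
  // IE exposes the <object> element by id, other browsers the named <embed>
  return document.getElementById("tiwy") || document["tiwy"];
}

function storeGet(key) {
  return flashStore().getItem(key);    // proxied to SharedObject.data[key]
}

function storePut(key, value) {
  flashStore().setItem(key, value);    // proxied write followed by SharedObject.flush()
}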
Open problems:
Besides the problems already mentioned (building a richer storage abstraction, building a generic synchronization framework, getting more control over the caching), I've hit my head against trying to fix the back button behavior in IE. The usual hacks rely on iframes pointing to a blank html page on the server, with some unique querystring parameters. Unfortunately, such queries don't work offline, because the unique querystring values essentially keep busting the cache.
I've also encountered some weird issues with Flash in Firefox 1.5, showing "Bad NPObject as private data" in the Javascript console and sometimes popping up warnings that an extension misbehaved. My guess at this point is that it was some interaction between Flash and some other extension, possibly AdBlock.
And finally, I'm still battling some memory leak issues. Although the code does use closures quite a bit, I can't see how it would create circular reference chains between the DOM and the Javascript engine.
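For reference, the classic pattern that does leak in old IE looks like this (an illustration of the kind of DOM-to-closure cycle in question, not code from TiwyWiki):

// Classic IE6-era leak: the DOM node references the handler, and the handler's
// closure references the node, forming a cycle across the DOM/JScript boundary.
function attachLeakyHandler(node) {
  node.onclick = function () {
    alert(node.id); // the closure keeps 'node' alive, node.onclick keeps the closure alive
  };
}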
Related:
- Adam Bosworth's Alchemy project and his famous post on modifying information offline,
- TiddlyWiki, a SPA (Single Page Application) wiki, that runs entirely in the browser (no need for a server) and uses a funky "File->Save Page" persistence model,
- The TODO list for TiwyWiki and the internal documentation (explains a bit about the storage structure and different tiers in the APIs).
Java Virtual Machine (JVM) - JVM virtual memory footprint
drops on java.net), the default maximum size of the generated code cache (it contains
the interpreter and code generated by the compilers) is 1 gigabyte. At startup, the jvm
attempts to reserve that amount of virtual address space for the code cache. Note that this
is a 'reservation' as opposed to a 'commitment'. Only 1 megabyte of physical memory
and corresponding swap space is actually allocated during jvm initialization.
We did this because reserved, but uncommitted, virtual address space is basically 'free'
when running a 64-bit jvm under a 64-bit operating system. But we realized during
mustang development that scenarios such as you describe were possible and reduced
the default maximum code cache size to 48 mb in the 64-bit vm's. We also raised the
initial committed size to around 2.5 mb. We're in the process of backporting that change
to 1.4.2 and 5.0.
In the meantime, you can override the default using the ReservedCodeCacheSize switch, e.g.,
-XX:ReservedCodeCacheSize=48m
which will reserve the mustang default.
Thanks for pointing out the problem.
Paul
I was wondering why java processes (even the simplest
ones) would take up Gigabytes of virtual memory on an
AMD64 machine? If I ulimit virtual memory usage to
1GB I get:
Error occurred during initialization of VM
Could not reserve enough space for code cache
even for a simple "java -version". The JDK I'm using
is
Java(TM) 2 Runtime Environment, Standard Edition
(build 1.5.0_04-b05)
Java HotSpot(TM) 64-Bit Server VM (build
1.5.0_04-b05, mixed mode)
I'm running Red Hat Enterprise Linux AS 4.0 on a dual
Xeon box with 2GB.
Cheers, Alex
Could not reserve enough space for object heap
What does ``Could not reserve enough space for object heap'' mean?
======================================================================
If you try to run the java compiler or the java virtual machine and you
encounter an error such as
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
then you have tried to run the command from a non-login shell. The error is
due to the proper ulimit resources not being set for your current shell. If
the shell you run the command in was a login shell, then it would have sourced
/etc/profile and set the ulimits correctly.
To fix the problem, make sure you are in a login shell (for example, a
terminal started with the '-ls' option). Also, see 12.11 for similar
side-effects.
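As a quick check (the exact values are site-specific), you can compare the limits in your current shell with those a fresh login shell would get:

ulimit -v                 # max virtual memory, in kilobytes, for the current shell
bash -l -c 'ulimit -v'    # the same limit as seen by a login shell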
Advanced routing mini-HOWTO
Timur A. Bolokhov, timur@tepkom.ru
This document describes the new routing features of the 2.1.X development and upcoming 2.2.X stable Linux kernels. Among them are source-based routing and Network Address Translation (NAT).
Introduction
Somewhere in the middle of the 2.1 development kernel series, the routing code was rewritten by Alexey Kuznetsov (kuznet@ms2.inr.ac.ru), and many new features such as policy (source-based) routing, Network Address Translation and scheduling were added. Networking is now managed by means of the ip, tc and rtmon utilities from the iproute2 package. I hope this document will help novices get into the new concepts.
Regrets
This document is written by a USER, so even some basic notions may be incorrect. The ip utility is very powerful, as you can see from its syntax in the appendix, and only a small part of its possibilities is described here; I hope you can guess the rest. Nothing is said about cooperation with tc, or about tc itself. No pictures yet. Excuse the bad language, punctuation and general mistakes.
Preliminary reading
I assume that you already have some experience with Linux routing, or have at least studied the NET-3, IP-Alias, IP-Subnetworking, IP-Masquerading and Proxy-ARP HOWTOs. The Kernel-HOWTO will help you compile a kernel with the new features.
Where to find them
- The iproute2 package is available at ftp://ftp.inr.ac.ru/ip-routing/. There are mirrors, but I could not even resolve one in DNS; maybe the situation will change.
- HOWTOs are, as usual, in /usr/doc/ or in the nearest mirror of sunsite.unc.edu.
- The ipchains utility is homed at http://www.adelaide.net.au/rustcorp/ipfwchains.
- This document: I hope the current version will be somewhere under ftp://post.tepkom.ru/pub/Linux/
Convention
A value in square brackets [ ] is optional.
Software
The author of this document is using a 2.1.121 kernel with glibc-2.0.7 and iproute2-ss980827, along with gated-3.5.9. The iproute2-glibc2-patch?? was also applied. This combination has experienced only a week of uptime; I could not test it longer.
How it was before
I'll try to briefly remind you of the routing concepts from the 2.0.X kernel series. When an IP packet hits a router's interface, the kernel first applies the rules from the input firewall chain to it. Then, if the packet survives and forwarding is enabled (/proc/sys/net/ipv4/ip_forward is nonzero), it is passed on to another interface according to the routing table and the forward firewall chain, or it simply ends its journey if its destination is one of the router's own interfaces. Normally the routing table contains descriptions of the paths to all possible IP destinations. The destinations are gathered into groups (networks), each of which is uniquely described by a network address (the first address in the group) and a netmask (masklen), which determines the number of addresses in the group (2^(32 - masklen); for example, a masklen of 27, i.e. netmask 255.255.255.224, covers 32 addresses). The routing table has two main columns:
DESTINATION: HOWTO_REACH_IT
Indeed, look at the example:
router># route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.32    0.0.0.0         255.255.255.224 U     0      0       12 eth1:1
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0       34 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        3 eth1
192.168.3.0     192.168.0.3     255.255.255.0   UG    1      0        8 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        1 lo
0.0.0.0         192.168.0.4     0.0.0.0         UG    1      0        3 eth0
We have two network devices and three interfaces (not counting loopback) -- eth0, eth1 and an alias eth1:1. Three networks are connected directly, so their gateway is 0.0.0.0; one network lies behind the gateway 192.168.0.3; and a wise router, 192.168.0.4, knows how to forward packets to the rest of the world. The routing table is scanned by the kernel from top to bottom; when the destination is found within some network (or there is a special "host" entry for it), the packet is forwarded to the specified gateway via the corresponding interface.
Note that the networks are sorted strictly in order of decreasing netmask (masklen), so that if a smaller network within a bigger one has its own gateway, it appears higher in the table and gets its chance to be routed correctly.
Now I want to remind you how to build such a table. Here is the basic syntax:
ifconfig DEVICE [ADDRESS] [netmask MASK] [broadcast ADDR] [up,down]
route {add,del,flush} [-net,-host] [NETWORK] [netmask MASK] \
>[gw GATEWAY] [dev DEVICE]
and the real commands:
router># ifconfig lo 127.0.0.1 netmask 255.0.0.0 broadcast 127.255.255.255 up
router># ifconfig eth0 192.168.0.1 netmask 255.255.255.0 up
router># ifconfig eth1 192.168.2.1 up
router># ifconfig eth1:1 192.168.1.35 netmask 255.255.255.224 \
> broadcast 192.168.1.63 up
router># route add -net 127.0.0.0 dev lo
router># route add -net 192.168.0.0 netmask 255.255.255.0 dev eth0
router># route add -net 192.168.2.0 dev eth1
router># route add -net 192.168.3.0 netmask 255.255.255.0 gw 192.168.0.3
router># route add -net 192.168.1.32 netmask 255.255.255.224 dev eth1:1
router># route add default gw 192.168.0.4
What it is now
A short description of the new routing mechanisms can be found in linux/Documentation/Policy-routing.txt. Below I'll try to describe them in more detail.
Now we have not just one table of correspondences
DESTINATION: HOWTO_REACH_IT
but a set of such tables (called classes in the document referenced above), each one applied to the packets satisfying certain conditions. These conditions are set with the ip rule syntax of the ip utility, while the routing tables themselves are filled with ip route. There are three built-in tables (classes): local, main and default. Here we can see how they are bound by the rules:
router># ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
Rules are scanned by the kernel in order of their preference (the number before the colon), so in this initial setup the path to the destination of any arriving packet is looked up first in table local and, if it's not found there, in tables main and default.
When an interface has been configured with ifconfig (or ip link and ip addr), host entries for its IP and broadcast addresses appear in table local, and a route to its attached network appears in table main. All this is done automatically; you don't need to type any command for it. To check what is in table N, just type ip route list table N.
The ifconfig and route utilities from net-tools are still available under 2.1.X, so the setup from the previous section can readily be done as above (but without having to add the routes to the directly attached networks). Another variant is to use ip:
router># ip link set eth0 up
router># ip addr add 192.168.0.1/24 broadcast 192.168.0.255 \
> label eth0 dev eth0
router># ip link set eth1 up
router># ip addr add 192.168.2.1/24 broadcast 192.168.2.255 \
> label eth1 dev eth1
router># ip addr add 192.168.1.35/27 broadcast 192.168.1.63 \
> label eth1:1 dev eth1
router># ip route add 192.168.3.0/24 via 192.168.0.3 table main
router># ip route add 0/0 via 192.168.0.4 table main
The static and default routes from this example could also have been put into any other table that is looked up after table main (i.e. with a preference greater than 32766). For example:
router># ip route add 192.168.3.0/24 via 192.168.0.3 table 1
router># ip route add 0/0 via 192.168.0.4 table 2
router># ip rule add [from 0/0] table 1 pref 32800
router># ip rule add [from 0/0] table 2 pref 32810
so that ip rule gives:
router># ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
32800: from all lookup 1
32810: from all lookup 2
But we won't consider this variant below.
So what is the difference between the new routing scheme and the previous one? The main one is that IP packets can now be sorted according to their source address, their TOS field and, maybe in the future, special marks put on them by an external classifier (like ipchains). Suppose that in our example we want the packets [with TOS 0x10 (minimum delay)] coming from 192.168.1.32/27 to be routed through the default gateway 192.168.0.5; then we type (after our interfaces are up):
router># ip route add 192.168.3.0/24 via 192.168.0.3 table main
router># ip route add 0/0 via 192.168.0.5 table 3
router># ip route add 0/0 via 192.168.0.4 table 4
router># ip rule add from 192.168.1.32/27 [tos 0x10] table 3 pref 32900
router># ip rule add from 0/0 table 4 pref 32910
The rules now look like this:
router># ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
32900: from 192.168.1.32/27 [tos 0x10] lookup 3
32910: from all lookup 4
A similar setup may be useful for organizations connected to the net through two or more ISPs via one Linux gateway (of course, we wouldn't check the TOS field here -- we would just route packets from the network assigned by the second ISP to its gateway or ppp interface). It is even possible to have a script notice problems on one link and redirect (in combination with NAT) critical outgoing connections to the other ISP's link. This won't work for incoming connections unless you change your DNS entries accordingly or have multihomed servers.
Here is the ipchains syntax for setting the TOS field:
ipchains -A input -p PROTO -s SOURCE [port] -d DEST [port] -t 0x01 0x10
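For instance, reusing the addresses from the example above, marking TCP traffic from 192.168.1.32/27 with the minimum-delay TOS could look like this (untested, shown only to make the placeholders concrete):
router># ipchains -A input -p tcp -s 192.168.1.32/27 -d 0.0.0.0/0 -t 0x01 0x10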
NATs
You should be extremely careful when playing with NAT, especially in a network with a complex topology, one routed by routing protocols or connected to other networks through more than one router. Translation of a packet's destination address is always done in routing table local. The syntax is the following:
ip route add nat WHAT/MASKLEN via WHERE table local
So to translate all packets coming to 192.168.1.50 into packets destined for 192.168.2.25, you type:
router># ip route add nat 192.168.1.50 via 192.168.2.25 table local
And to translate the whole subnet 192.168.1.40/29 into 192.168.2.48/29, the command is:
router># ip route add nat 192.168.1.40/29 via 192.168.2.48 table local
Translation of source addresses should be set by means of rules:
ip rule add from REAL_SOURCE/MASKLEN nat PSEUDO_SOURCE table TABLEID
According to this rule, IP packets coming from REAL_SOURCE will have their source addresses translated to PSEUDO_SOURCE and be routed according to table TABLEID. The translation is valid only for packets whose destination is in this table.
Let's illustrate this. Suppose that in our example 192.168.2.0/24 is address space from the ISP with gateway 192.168.0.4, and 192.168.1.32/27 is from the ISP with gateway 192.168.0.5. We suddenly want to relink the hosts in subnetwork 192.168.2.48/29 to the other ISP, and we have wisely reserved a spare subnet 192.168.1.40/29 for this. But we want no translation when 192.168.2.48/29 talks to the local nets, especially to 192.168.1.0. The following commands provide what we need:
router># ip route add nat 192.168.1.40/29 via 192.168.2.48 table local
router># ip rule add from 192.168.2.48/29 nat 192.168.1.40 table 3 pref 32820
(Recall that table 3 contains the default gw 192.168.0.5.) Our setup is now:
router># ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
32820: from 192.168.2.48/29 nat 192.168.1.40 lookup 3
32900: from 192.168.1.32/27 lookup 3
32910: from all lookup 4
Want the same translation when going to 192.168.1.0 too? Ok, just type
router># ip rule add from 192.168.2.48/29 nat 192.168.1.40 table 5
router># ip route add 192.168.1.0/24 via 192.168.0.3 table 5
Then you'll get:
router># ip rule
0: from all lookup local
32765: from 192.168.2.48/29 nat 192.168.1.40 lookup 5
32766: from all lookup main
32767: from all lookup default
32820: from 192.168.2.48/29 nat 192.168.1.40 lookup 3
32900: from 192.168.1.32/27 lookup 3
32910: from all lookup 4
Note that you should always think about where your rule appears in the list, i.e. control its preference; otherwise the result may be very confusing. Can you guess why we couldn't just put the route to 192.168.1.0/24 into table 3, with
router># ip route add 192.168.1.0/24 via 192.168.0.3 table 3
instead of the last two commands (the ip rule add ... and the ip route add ...)? I hope these imaginary examples will help you organize your real system.
Appendix
The full syntax of the ip utility is gathered here:
ip
Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
where OBJECT := { link | addr | route | rule | neigh | tunnel }
OPTIONS := { -s[tatistics] | -f[amily] { inet | inet6 }}
ip link
Usage: ip link set DEVICE { up | down | arp { on | off } |
multicast { on | off } | txqueuelen PACKETS |
name NEWNAME }
ip link show [ DEVICE ]
ip addr
Usage: ip addr [ add | del ] IFADDR dev STRING
ip addr show [ dev STRING ] [ ipv4 | ipv6 | link | all ] [txqueuelen]
IFADDR := PREFIX [ local ADDR ]
[ broadcast ADDR ] [ anycast ADDR ]
[ label STRING ] [ scope SCOPE ]
SCOPE := [ host | link | global | NUMBER ]
ip route
Usage: ip route list SELECTOR
ip route { change | del | add | append | replace | monitor } ROUTE
SELECTOR := [ root PREFIX ] [ match PREFIX ] [ exact PREFIX ]
[ table TABLE_ID ] [ proto RTPROTO ]
[ type TYPE ] [ scope SCOPE ]
ROUTE := NODE_SPEC [ INFO_SPEC ]
NODE_SPEC := [ TYPE ] PREFIX [ tos TOS ]
[ table TABLE_ID ] [ proto RTPROTO ]
[ type TYPE ] [ scope SCOPE ]
INFO_SPEC := NH OPTIONS FLAGS [ nexthop NH ]...
NH := [ via ADDRESS ] [ dev STRING ] [ weight NUMBER ] NHFLAGS
OPTIONS := FLAGS [ mtu NUMBER ] [ rtt NUMBER ] [ window NUMBER ]
[ flowid CLASSID ]
TYPE := [ unicast | local | broadcast | multicast | throw |
unreachable | prohibit | blackhole | nat ]
TABLE_ID := [ local | main | default | all | NUMBER ]
SCOPE := [ host | link | global | NUMBER ]
NHFLAGS := [ onlink | pervasive ]
RTPROTO := [ kernel | boot | static | NUMBER ]
ip rule
Usage: ip rule [ list | add | del ] SELECTOR ACTION
SELECTOR := [ from PREFIX ] [ to PREFIX ] [ tos TOS ]
[ dev STRING ] [ pref NUMBER ]
ACTION := [ table TABLE_ID ] [ nat ADDRESS ]
[ prohibit | reject | unreachable ]
[ flowid CLASSID ]
TABLE_ID := [ local | main | default | new | NUMBER ]
ip neigh
Usage: ip neigh { add | del } { ADDR [ lladdr LLADDR ]
[ nud { permanent | noarp | stale | reachable } ]
| proxy ADDR } [ dev DEVICE ]
ip neigh show [ ipv4 | ipv6 | all ]
ip tunnel
Usage: ip tunnel { add | change | del | show } [ NAME ]
[ mode { ipip | gre | sit } ] [ remote ADDR ] [ local ADDR ]
[ [i|o]seq ] [ [i|o]key KEY ] [ [i|o]csum ]
[ ttl TTL ] [ tos TOS ] [ nopmtudisc ] [ dev PHYS_DEV ]
Where: NAME := STRING
ADDR := { IP_ADDRESS | any }
TOS := { NUMBER | inherit }
TTL := { 1..255 | inherit }
KEY := { DOTTED_QUAD | NUMBER }
Linux Info
Binding Connections to Multiple Interfaces
curl_setopt($conn_ch1, CURLOPT_INTERFACE, $AA_RANDOM_INTERFACE);
static CURLcode bindlocal(struct connectdata *conn,
                          curl_socket_t sockfd)
{
#ifdef ENABLE_IPV6
  char ipv6_addr[16];
#endif
  struct SessionHandle *data = conn->data;
  struct sockaddr_in me;
  struct sockaddr *sock = NULL;  /* bind to this address */
  socklen_t socksize; /* size of the data sock points to */
  unsigned short port = data->set.localport; /* use this port number, 0 for
                                                "random" */
  /* how many port numbers to try to bind to, increasing one at a time */
  int portnum = data->set.localportrange;

  /*************************************************************
   * Select device to bind socket to
   *************************************************************/
  if (data->set.device && (strlen(data->set.device)<255) ) {
    struct Curl_dns_entry *h=NULL;
    char myhost[256] = "";
    in_addr_t in;
    int rc;
    bool was_iface = FALSE;
    int in6 = -1;

    /* First check if the given name is an IP address */
    in=inet_addr(data->set.device);

    if((in == CURL_INADDR_NONE) &&
       Curl_if2ip(data->set.device, myhost, sizeof(myhost))) {
      /*
       * We now have the numerical IPv4-style x.y.z.w in the 'myhost' buffer
       */
      rc = Curl_resolv(conn, myhost, 0, &h);
      if(rc == CURLRESOLV_PENDING)
        (void)Curl_wait_for_resolv(conn, &h);

      if(h) {
        was_iface = TRUE;
        Curl_resolv_unlock(data, h);
      }
    }

    if(!was_iface) {
      /*
       * This was not an interface, resolve the name as a host name
       * or IP number
       */
      rc = Curl_resolv(conn, data->set.device, 0, &h);
      if(rc == CURLRESOLV_PENDING)
        (void)Curl_wait_for_resolv(conn, &h);

      if(h) {
        if(in == CURL_INADDR_NONE)
          /* convert the resolved address, sizeof myhost >= INET_ADDRSTRLEN */
          Curl_inet_ntop(h->addr->ai_addr->sa_family,
                         &((struct sockaddr_in*)h->addr->ai_addr)->sin_addr,
                         myhost, sizeof myhost);
        else
          /* we know data->set.device is shorter than the myhost array */
          strcpy(myhost, data->set.device);
        Curl_resolv_unlock(data, h);
      }
    }

    if(! *myhost) {
      /* need to fix this
         h=Curl_gethost(data,
                        getmyhost(*myhost,sizeof(myhost)),
                        hostent_buf,
                        sizeof(hostent_buf));
      */
      failf(data, "Couldn't bind to '%s'", data->set.device);
      return CURLE_HTTP_PORT_FAILED;
    }

    infof(data, "Bind local address to %s\n", myhost);

#ifdef SO_BINDTODEVICE
    /* I am not sure any other OSs than Linux that provide this feature, and
     * at the least I cannot test. --Ben
     *
     * This feature allows one to tightly bind the local socket to a
     * particular interface. This will force even requests to other local
     * interfaces to go out the external interface.
     *
     */
    if (was_iface) {
      /* Only bind to the interface when specified as interface, not just as a
       * hostname or ip address.
       */
      if (setsockopt(sockfd, SOL_SOCKET, SO_BINDTODEVICE,
                     data->set.device, strlen(data->set.device)+1) != 0) {
        /* printf("Failed to BINDTODEVICE, socket: %d device: %s error: %s\n",
           sockfd, data->set.device, Curl_strerror(SOCKERRNO)); */
        infof(data, "SO_BINDTODEVICE %s failed\n",
              data->set.device);
        /* This is typically "errno 1, error: Operation not permitted" if
           you're not running as root or another suitable privileged user */
      }
    }
#endif

    in=inet_addr(myhost);

#ifdef ENABLE_IPV6
    in6 = Curl_inet_pton (AF_INET6, myhost, (void *)&ipv6_addr);
#endif
    if (CURL_INADDR_NONE == in && -1 == in6) {
      failf(data,"couldn't find my own IP address (%s)", myhost);
      return CURLE_HTTP_PORT_FAILED;
    } /* end of inet_addr */

    if ( h ) {
      Curl_addrinfo *addr = h->addr;
      sock = addr->ai_addr;
      socksize = addr->ai_addrlen;
    }
    else
      return CURLE_HTTP_PORT_FAILED;

  }
  else if(port) {
    /* if a local port number is requested but no local IP, extract the
       address from the socket */
    memset(&me, 0, sizeof(struct sockaddr));
    me.sin_family = AF_INET;
    me.sin_addr.s_addr = INADDR_ANY;

    sock = (struct sockaddr *)&me;
    socksize = sizeof(struct sockaddr);

  }
  else
    /* no local kind of binding was requested */
    return CURLE_OK;

  do {

    /* Set port number to bind to, 0 makes the system pick one */
    if(sock->sa_family == AF_INET)
      ((struct sockaddr_in *)sock)->sin_port = htons(port);
#ifdef ENABLE_IPV6
    else
      ((struct sockaddr_in6 *)sock)->sin6_port = htons(port);
#endif

    if( bind(sockfd, sock, socksize) >= 0) {
      /* we succeeded to bind */
      struct Curl_sockaddr_storage add;
      socklen_t size;

      size = sizeof(add);
      if(getsockname(sockfd, (struct sockaddr *) &add, &size) < 0) {
        failf(data, "getsockname() failed");
        return CURLE_HTTP_PORT_FAILED;
      }
      /* We re-use/clobber the port variable here below */
      if(((struct sockaddr *)&add)->sa_family == AF_INET)
        port = ntohs(((struct sockaddr_in *)&add)->sin_port);
#ifdef ENABLE_IPV6
      else
        port = ntohs(((struct sockaddr_in6 *)&add)->sin6_port);
#endif
      infof(data, "Local port: %d\n", port);
      return CURLE_OK;
    }
    if(--portnum > 0) {
      infof(data, "Bind to local port %d failed, trying next\n", port);
      port++; /* try next port */
    }
    else
      break;
  } while(1);

  data->state.os_errno = SOCKERRNO;
  failf(data, "bind failure: %s",
        Curl_strerror(conn, data->state.os_errno));
  return CURLE_HTTP_PORT_FAILED;

}