Thursday, November 12, 2015

StringTokenizer to split using multiple tokens in Java

When trying to split a string over multiple delimiters, the first tool that comes to mind is usually String.split(). If the string has to be split over several different delimiters, we can split once and then split each of the resulting parts again; and if those parts need further splitting, we have to split yet again.

Let me explain with an example. Say we have a URL that needs to be parsed.

URL: http://localhost/system/config/file/action/updateLogo?fileName=largetc.jpg&text=Company%20Logo

So if we have to read all the path parameters and then also read the arguments, we first have to split over the '/' character as below:

String[] urlTokens = urlFullPath.split("/");

Now let us say that we want to read the arguments passed in the URL. We will have to do multiple splits over the path string to reach the desired variables. Below is one way it can be achieved.

String[] urlTokens = urlFullPath.split("/");
for (String urlPath : urlTokens) {
    if (urlPath.contains("?")) {
        String[] argTokens = urlPath.split("\\?");
        String[] argsParts = argTokens[1].split("&");
        for (String args : argsParts) {
            System.out.println("Args: " + args);
        }
    }
}

The output of the above piece of Java code would look something like:
Args: fileName=largetc.jpg
Args: text=Company%20Logo
That, as you can see, is a lot of splits. Is there a better way? There are probably a dozen. Here is one with StringTokenizer that makes the manipulation easier.

There are two ways to specify a delimiter for a StringTokenizer object.

  1. Pass the delimiters to the StringTokenizer constructor when initializing the object
  2. Pass the delimiter you are looking for next to the nextToken() method at runtime

Using StringTokenizer constructor

In the constructor one can pass all the characters on which the string has to be split:
StringTokenizer stringTokenizer = new StringTokenizer(theStringToParse, "/?&");
And then iterate over the tokens via a while loop as below:
while (stringTokenizer.hasMoreTokens()) {
    System.out.println(stringTokenizer.nextToken());
}
Using nextToken method

To pass the delimiter to the nextToken() method at runtime, below is one of the ways:
while (stringTokenizer.hasMoreTokens()) {
    System.out.println(stringTokenizer.nextToken("/"));
}
The delimiter above can be changed as needed on each call. One thing to keep in mind is that once nextToken() moves past a token, that token is consumed and one cannot trace back.
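As a quick illustration, here is a minimal sketch (the class name is just for this example) of how switching delimiters mid-stream behaves. Note one subtlety: once the delimiter set changes, characters from the old set are no longer skipped and may ride along with the next token.

import java.util.StringTokenizer;

public class DelimiterSwitchDemo {
    public static void main(String[] args) {
        StringTokenizer st = new StringTokenizer("key=value&flag=on");
        // Split on '=': consumes characters up to the first '='.
        System.out.println(st.nextToken("="));  // prints: key
        // Switch to '&': the '=' left behind is no longer a delimiter,
        // so it rides along at the front of the next token.
        System.out.println(st.nextToken("&"));  // prints: =value
        // Consumed tokens are gone for good; only "flag=on" remains.
        System.out.println(st.hasMoreTokens()); // prints: true
    }
}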

Split that URL
Combining all of the above, here's a piece of code that can be used to extract the arguments passed in a URL as key-value pairs.

StringTokenizer stringTokenizer = new StringTokenizer(theStringToParse);
// iterate through tokens of path parameters
while (stringTokenizer.hasMoreTokens()) {
    String partOfToken = stringTokenizer.nextToken("?");
    if (partOfToken.contains("=")) {
        StringTokenizer tokenizeAgain = new StringTokenizer(partOfToken, "&");
        while (tokenizeAgain.hasMoreTokens()) {
            String argument = tokenizeAgain.nextToken();
            String[] keyValueOfArgument = argument.split("=");
            System.out.println("Key: " + keyValueOfArgument[0] + " and Value: " + keyValueOfArgument[1]);
        }
    }
}
The output of the above piece of code, once wrapped in the boilerplate needed to execute it, and assuming the URL given at the beginning is the string to be parsed, would be:
Key: fileName and Value: largetc.jpg
Key: text and Value: Company%20Logo 
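For completeness, here is one way that boilerplate might look (the class name and the hard-coded URL are illustrative):

import java.util.StringTokenizer;

public class UrlArgumentParser {

    public static void main(String[] args) {
        // The example URL from the beginning of this post.
        String theStringToParse = "http://localhost/system/config/file/action/updateLogo"
                + "?fileName=largetc.jpg&text=Company%20Logo";

        // The default constructor delimiters (whitespace) never match a URL,
        // so the first nextToken("?") call splits path from query string.
        StringTokenizer stringTokenizer = new StringTokenizer(theStringToParse);
        while (stringTokenizer.hasMoreTokens()) {
            String partOfToken = stringTokenizer.nextToken("?");
            if (partOfToken.contains("=")) {
                // Split the query string on '&' into individual arguments.
                StringTokenizer tokenizeAgain = new StringTokenizer(partOfToken, "&");
                while (tokenizeAgain.hasMoreTokens()) {
                    String[] keyValue = tokenizeAgain.nextToken().split("=");
                    System.out.println("Key: " + keyValue[0] + " and Value: " + keyValue[1]);
                }
            }
        }
    }
}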

Saturday, September 26, 2015

Is the net really neutral?


The recent debate on net neutrality has become a major issue, so much so that even Rahul Gandhi spoke on the topic. It has also made clear to the general public that there are many intermediate players involved before any content from the internet is delivered to the end user, and each of these players runs a business that serves its own interest.

A part of this not-so-simple structure of the internet is the government. The Federal Communications Commission (FCC) was created to regulate interstate communications by radio, television, wire, satellite, and cable, acting as the watchdog of the USA's telecommunication industry. TRAI is the Indian counterpart, working to regulate telecom services and tariffs in India.

There have been several comparisons of the internet to the electricity grid for the purpose of the net-neutrality debate. This comparison is not exactly appropriate. The flow of charged particles carries no inherent meaning that reveals what type of electric appliance is being used: whether the consumer is running a TV, a fridge, or any other specific appliance cannot be known by looking at the rate of flow of the charge. Other comparisons are to a swing, whose movement is controlled based on the payment made for its use, and to buying fruit from a street vendor. These comparisons are rather simplistic, ignoring the more complex nuances of the internet.

A more appropriate comparison would be that of a postman, for two reasons. First, service providers currently multiplex data from different users over a single connection. This is because the underlying optical links run at speeds far higher than what an individual can use; a link may carry on the order of 100 gigabits per second, while no single consumer can use data at such a rate. Hence, instead of dedicating the whole underlying network to one user at a time, the network providers put together data from different users (data over a network is sent in packets, one at a time) for transmission. It is like a postman who, instead of carrying one person's letters at a time, puts letters from multiple senders in a bag and carries them along together.

The second reason this analogy is closer is that each letter carries inherent information about its source and destination, allowing discrimination based on this metadata, while the content of the mail is the actual data. The postman thus models the internet's complexity on a smaller scale.

Another argument being put forth is that of internet fast lanes. These would allow telecom operators to give preferential speed to companies with deep pockets while throttling the speeds of others, as illustrated by the swing comparison given earlier in this article. But the internet already has "fast lanes" because of CDNs: distributed systems of servers that provide content to end users with high performance and availability. In simple language, when someone logs in to, say, Facebook or Gmail, it is not guaranteed that all the data on their wall is coming from a Facebook server. Most of it may be coming from a CDN that sits geographically very close to the user, so that the speed of delivery can be increased for a quick response. Akamai Technologies is the most popular CDN, delivering content for several privileged companies; as soon as a user opens or logs in to many websites, one can see content being pulled from Akamai.com in a small pop-up at the bottom of the browser screen. Another technique in use is peering, where content providers have direct connections to ISPs and run dedicated servers deep inside these ISPs to deliver content faster. Clearly these "fast lanes" are available only to those with deep pockets, giving them an edge over others with lighter purses.
In one of our previous articles we pointed out that the internet is controlled by its gateways, i.e. the search engines. We argued that if something is hidden on the 100th page of the search results, even if it is the most relevant accessible information, it is as good as non-existent and inaccessible. So if the top positions for specific keywords are paid for and occupied by companies, however irrelevant to the query, we can clearly see how deep pockets can tilt the internet away from neutrality. So let's face it.

The debate over net neutrality is not a recent one. It started in 2003, when the Columbia University media law professor Tim Wu coined the term. What we are still missing is a means of keeping the ISPs in check, else these debates will resurface in newer forms and at different levels. The debates also raise serious concerns that internet service providers are growing powerful enough to influence policy changes. One way of exercising control is through common carrier law. Such laws are necessary to define the framework within which internet service providers have to function. There will definitely be opposition if they clamp down on the freedom these companies currently enjoy, hence the framing has to be done impartially by a third party that includes public opinion in its decisions.

Internet.org by Facebook has been criticized as being against net neutrality, while Mark Zuckerberg defends it as a plan to bring the internet to everyone. After the uproar, mostly in India, Zuckerberg expanded Internet.org to allow developers to provide apps through it. Facebook's argument is that the debate was over consumer choice and developer choice, and that the improved platform addresses both. Currently they offer several projects that work via the platform, along with an option to build more. The argument has thus supposedly moved on from the charge of providing lopsided access to the internet; now the discussion has to be about which services and websites are or can be provided access via Internet.org, and who is to decide this.

Let us face it: the internet is not so neutral after all, and people with deep pockets will keep working to tilt it further in their favour. Let us get back to our analogy of the postman. The postal service is a necessity for everyone, and there will soon be a day when the internet is an equally necessary service, just like the railroads, bus services, or airlines. Hence what is being proposed is to regulate the telecommunications companies as common carriers.

Of course there will be arguments against the government gaining control over the network of networks, on the ground that it is precisely this freedom that gave network providers the incentive to build the infrastructure that currently delivers the internet. Another argument is that if the government holds control, the whole process will be slowed down, while these types of services need a faster response. The need of the hour is to further the debate on common carriage and take a stand for the common good of the masses at large, with appropriate consideration for the involved parties.



Published Article Reference: http://thecompanion.in/is-net-really-neutral/

Wednesday, September 25, 2013

A Question of Privacy

Government access to digital information for the purpose of security is not a new or isolated phenomenon. As pointed out in my previous article, "Who Controls the Internet?", the government has a significant effect on the way the network of networks is shaped. In that sense the government already controls the data to some extent. Direct access to the data is of course debatable on many grounds, including the results of such access.


"Personal information online" is a concept that can lead to contradictory connotations. If personal information is put online, then its owner is no longer the individual; it lies with the owner of the servers where the site is hosted and where the information is stored. So it may no longer be called personal information but rather server information, as that is what identifies the server and becomes its attribute; a Facebook server, for example, is called so because it holds the information of Facebook's users. Hence, once the information is out of an individual's hands, it is at the discretion of the company that owns the data to use it as it wills. Google, in its motion of September 5th, 2013, made it clear that it is rather foolish to think that information put online by the user is not accessed at all and kept completely private. Google scans all emails to gather information on an individual's online activity. For example, if one is sending mail about a pizza party, googling pizza varieties, and also searching YouTube for videos on making pizzas, then it is no surprise that, based on location information, the person gets advertisements for pizza outlets or shops selling pizza products. What should Google do if the individual is doing the same with the keywords "bombs" or "guns"?

The Foreign Intelligence Surveillance Act of 1978 (FISA) spells out the circumstances under which the government can eavesdrop for the purpose of gathering foreign intelligence. Before September 11th, 2001, the Bush administration's Justice Department had approved a program that may have relied on similar technology but was far narrower in scope. After September 11th the USA PATRIOT Act was passed under the Bush administration, primarily to cover terrorism on behalf of groups not specifically backed by a foreign government. Further, the Protect America Act of 2007 removed the warrant requirement for government surveillance of foreign intelligence targets. These developments point to two things: that surveillance has been going on longer than we might know, and that the monitoring was born out of the requirement of battling terrorism using all available data. The best way to analyze whether this goal is being achieved is to look at the success of the whole program, which is hard to determine. A direct correlation between something that did not happen, or was perhaps prevented from happening, and the act of collecting public information can only be established if it is attributed as such by either those who prevented the event or those who handle the data; and putting such information directly in the public domain would put the further success of the program at risk, as it is the discretion of the program that led to its success in the first place, as argued below.

When the British government decided to build its own Big Brother database, there was a public debate. The criticism was so high that the plan had to be dropped for good (though the British government does still have its counterpart of PRISM). The United States, on the other hand, had its bill passed under different circumstances, with the whole nation living, then as now, in a state of perpetual fear that is evident in surveys of American citizens. Asked whether "people should support their country even if the country is in the wrong," more Americans said yes than citizens of eight European countries; asked whether "right or wrong should be a matter of personal conscience," Americans came in next to last. These results were found in 2003 by the International Social Survey Program. Further, a debate of sorts on the NSA's data-collection efforts was discouraged, with Mike Rogers, chairman of the House Intelligence Committee, reasoning: "If you tell our adversaries and enemies in the counterterrorism fight exactly how we conduct business, they are not going to do business the same ever again."
The government has put several checks on data gathering. A special court reviews applications for surveillance, composed of 11 U.S. District Court judges selected by the chief justice of the U.S. Supreme Court. The downside is that this court has recently been giving permission for collections of millions of records, so that the Verizon order sweeps up detailed information about millions of Americans in a single order. Another argument put forth against the privacy-infringement charge is that the NSA collects only the metadata of a call: when a person dials a number or sends an email, the dialed number or the "To" address is, like a postal address, visible to all; hence there is no harm in collecting metadata, and collecting the actual data would require a separate individual warrant. The counter: let us assume that a newspaper correspondent publishes a controversial article citing internal sources. Using the metadata of whom the correspondent was talking to over the phone and whom she has been emailing, it can very easily be pinpointed who the internal source is, which again leads to an invasion of privacy.

Incidental data collection is also quoted, where the purpose is to collect only the relevant information, but in the process of reaching that information one has to collect all the available data and then sift through it. Certainly the call and online activity of every Verizon customer, or of everyone using email etc…, cannot be relevant to such investigations. It can instead be argued that the agency is collecting massive amounts of information regardless of whether it is relevant to national security. These concerns find more strength with news such as the Central Intelligence Agency's confession, after multiple denials, of snooping on the MIT professor Noam Chomsky. Bilateral relations with other nations will also have to be looked into, as the Act extends over American soil and thus over the servers that lie on it. Countries like Australia are debating whether to keep their server information on American soil: if a company uses the cloud service of an American company with servers hosted in America, then that company's data is potentially liable to scrutiny. The only way to overcome this problem is to build similar alternatives inside a nation's own borders, be it for email, cloud, or online shopping.

The NSA can retain the data for up to five years and make use of "inadvertently acquired" domestic communications if they contain usable intelligence, information on criminal activity, or threats of harm to people or property, are encrypted, or are believed to contain any information relevant to cybersecurity. What this means is that if the data is encrypted then the US government can track it, scrutinize it, and keep it for analysis until it is deciphered; if the data is not encrypted then anyone can see it. Anything that goes over HTTPS is encrypted, be it our email or Facebook data, which makes it eligible for collection; if unencrypted, any Tom, Dick, or Harry can see what is being transferred over the network. Sounds rather like a chicken-and-egg problem.

Whether all this data is really solving the problem or complicating it further is a question that needs deeper analysis. As of October 2012, nearly five million people held government security clearances to access classified information, of whom 1.4 million held top-secret clearances. More than a third of those with top-secret clearances are contractors. Booz Allen Hamilton, the strategy and technology consulting firm where Edward Snowden worked, employs almost 25,000 people, 76% of whom have government clearances allowing them to handle sensitive national security information. This is necessary, as analysing such huge amounts of data requires massive algorithms, computing facilities, and workforce, but it also gives a large set of people access to sensitive information, leading to a different security concern.

So whether we should be concerned at all is for everyone to decide collectively.

The argument "I am not a terrorist and so I have nothing to hide" holds no ground. Benjamin Franklin warned of the siren's call for power by government officials when he observed that "those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." Moreover, on reflection, the main concern seems to be about power, quite literally: where will they get all the electricity to keep alive the enormous data center being built, by contractors with top-secret clearances, at Bluffdale, which sits in a bowl-shaped valley in the shadow of Utah's Wasatch Range to the east and the Oquirrh Mountains to the west? Combined with that is the requirement of computational and algorithmic power. Will this eventually turn out to be a failed project, just like the earlier Trailblazer Project? Only time will tell.

References:

Shayana Kadidal (June 7, 2013), Obama Administration Continues Bush’s Unconstitutional Policies, http://www.usnews.com/debate-club/should-americans-be-worried-about-the-national-security-agencys-data-collection/obama-administration-continues-bushs-unconstitutional-policies.
Jonathan Turley (June 7, 2013), The Founding Fathers Rejected a System of Authoritarian Power, http://www.usnews.com/debate-club/should-americans-be-worried-about-the-national-security-agencys-data-collection/the-founding-fathers-rejected-a-system-of-authoritarian-power.
Alberto Gonzales (June 7, 2013), The Government Must Use All Available Technology to Protect Americans, http://www.usnews.com/debate-club/should-americans-be-worried-about-the-national-security-agencys-data-collection/alberto-gonzales-the-government-must-use-all-available-technology-to-protect-americans.
John Yoo (June 7, 2013), Government Data Collection Doesn’t Violate the Constitution, http://www.usnews.com/debate-club/should-americans-be-worried-about-the-national-security-agencys-data-collection/john-yoo-government-data-collection-doesnt-violate-the-constitution.
Washington Wire (August 9, 2013), NSA Data Debate: Glossary and Who’s Who, http://blogs.wsj.com/washwire/2013/08/09/nsa-data-debate-glossary-and-whos-who/.
Tom Gara (June 10, 2013), Booz Allen’s Top-Secret Workforce, http://blogs.wsj.com/corporate-intelligence/2013/06/10/booz-allens-top-secret-workforce/.
Glenn Greenwald and James Ball (June 20, 2013), The top secret rules that allow NSA to use US data without a warrant, http://www.theguardian.com/world/2013/jun/20/fisa-court-nsa-without-warrant.
Eyal Press (August 5, 2013), Whistleblower, Leaker, Traitor, Spy, http://www.nybooks.com/blogs/nyrblog/2013/aug/05/whistleblower-leaker-traitor-spy/.
M.S. on Democracy in America (Jun 11, 2013), Should the government know less than Google?, http://www.economist.com/blogs/democracyinamerica/2013/06/surveillance-0.
Kevin Drum (June 10, 2013), Why the NSA Surveillance Program Isn’t Like “The Wire”, http://www.motherjones.com/kevin-drum/2013/06/nsa-debate-we-should-focus-future-not-present.
Mike Masnick (June 18, 2013), Senator Lindsey Graham Defends NSA Surveillance By Arguing About Something Entirely Different, http://www.techdirt.com/articles/20130617/01573323504/senator-lindsey-graham-defends-nsa-surveillance-arguing-about-something-entirely-different.shtml.
Andy Greenberg (June 20, 2013), Leaked NSA Doc Says It Can Collect And Keep Your Encrypted Data As Long As It Takes To Crack It, http://www.forbes.com/sites/andygreenberg/2013/06/20/leaked-nsa-doc-says-it-can-collect-and-keep-your-encrypted-data-as-long-as-it-takes-to-crack-it/.
Adam Bender (June 12, 2013), PRISM revives data sovereignty arguments in Australia, http://www.computerworld.com.au/article/464445/prism_revives_data_sovereignty_arguments_australia/.
James Bamford (November 5, 2009), Who’s in Big Brother’s Database?, http://www.nybooks.com/articles/archives/2009/nov/05/whos-in-big-brothers-database/.
Brad Bannon (June 6, 2013), The Epitome of Executive Overreach, http://www.usnews.com/opinion/blogs/brad-bannon/2013/06/06/government-overreaches-with-verizon-phone-record-collecting.
Newzfirst (August 22, 2013), NSA collected thousands of Americans’ emails, http://newzfirst.com/web/guest/full-story/-/asset_publisher/Qd8l/content/nsa-collected-thousands-of-americans-emails.

Wikipedia References:

  1. Foreign Intelligence Surveillance Act, http://en.wikipedia.org/wiki/Foreign_Intelligence_Surveillance_Act
  2. Patriot Act, http://en.wikipedia.org/wiki/USA_PATRIOT_Act
  3. Protect America Act of 2007, http://en.wikipedia.org/wiki/Protect_America_Act_of_2007
  4. Foreign Intelligence Surveillance Act of 1978 Amendments Act of 2008, http://en.wikipedia.org/wiki/FISA_Amendments_Act_of_2008
  5. National Security Agency, http://en.wikipedia.org/wiki/National_Security_Agency
  6. PRISM (surveillance program), http://en.wikipedia.org/wiki/PRISM_%28surveillance_program%29

Sunday, July 14, 2013

Who Controls the Internet?

The recent growth of the internet as a new communication channel has allowed individuals to connect to the world and express themselves in an unprecedented manner. Techno-enthusiasts believe that this is an unrestrained space with unbridled freedom of speech; a recent revolution has even been attributed to this space of free expression. But is it really such a space, with utopian, unlimited dimensions of self-expression? Though not comprehensive, here is an attempt to understand and give an alternative view of this controlled space.

A popular, persistent perspective is that one can access information about anything over the internet. This is prevalent among net-savvy people, who turn to the internet for any required information, from directions to prescriptions. Note that the user is interested in the information itself, not the source from which it is derived: instead of accessing information via a URI, relevant information is searched for through popular search engines. As these become the gateway to information, controlling them effectively controls information access. The best example of this kind of control is found in China. There is an actual list of blacklisted keywords that cannot be searched in the People's Republic of China, and all search engines have to comply with these rules or be blocked from functioning inside the country. China also has a list of blocked sites, and search engines omit results from these websites. Google, for example, has one of its indexing servers placed inside China and tries to access sites from there; the sites that are not accessible are not indexed for Chinese search.

Further, it is at the discretion of the search engines to rank the results in whatever order is convenient to them, and hence if something is hidden on the 100th page of the search results, even if it is the most relevant accessible information, it is as good as non-existent and inaccessible. One example is that of a company website which never appeared in the top search results, whatever the search keyword and however relevant it was to the services provided by the company, because the top positions for those keywords were paid for and occupied by several other companies, however irrelevant.

Any URI has to be mapped to an IP address to locate the actual server where the website is physically hosted. This is managed by hierarchically maintained root name servers. The top-level root authority is the Internet Assigned Numbers Authority (IANA), a department operated by the Internet Corporation for Assigned Names and Numbers (ICANN), which functions under contract to the United States Department of Commerce (DOC). The Department of Commerce also provides ongoing oversight, verifying additions and changes made to the DNS root zone to ensure IANA complies with its policies. This structure hands control of internet naming to a single organization, and thus puts the whole structure of the internet partially at its bidding. This single point of control is necessary for the uniqueness of all the resources (websites) on the internationally shared internet, and hence cannot easily be done away with.
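As a small aside, this name-to-address mapping is what a resolver performs whenever a hostname is used. A minimal Java sketch, assuming a reachable network (the hostname is only for illustration and the printed address may vary):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsLookupDemo {
    public static void main(String[] args) throws UnknownHostException {
        // Behind this one call, the resolver walks the hierarchy
        // rooted in the IANA-managed root zone.
        InetAddress address = InetAddress.getByName("example.com");
        System.out.println(address.getHostAddress()); // e.g. 93.184.216.34
    }
}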

A company called HavenCo promised its clients servers that were "physically secure from any legal action". Founded in 2000, it operated from Sealand, a self-declared sovereign principality occupying a man-made former World War II defence facility about six miles off the coast of Suffolk, southeast England. Arguing that Sealand was not part of the World Trade Organization, HavenCo claimed that intellectual property law did not apply to it, and so imposed no copyright restrictions on data hosted on its servers. Thus this micronation became a haven for pornography. To enforce control on such a set-up, governments used the "intermediaries".

Online transactions can be divided among three parties: source, intermediary, and target. The intermediaries in the above case included the payment gateways and banks that processed online payments for these porn sites. These intermediaries were coerced, even forced, not to process transactions from such illegal sites, thus throttling HavenCo's survival. So control over the internet is exercised not only by throttling access through the network but also by controlling the dependent elements of the rogue party's business model. In November 2008, HavenCo ceased operations without explanation. Surprisingly, on February 18th, 2013, the Sealand newsletter ran a brief announcement that HavenCo would be offering "new services in early 2013 to facilitate private communications and storage". It will be interesting to see whether HavenCo accepts the supremacy of international governmental control over the internet and adheres to "world-dictated" policies, or tries to push its own agenda in a new way, and how it will be controlled this time around.

On 20th November, 2012, two Mumbai girls were arrested for their comments on a popular social networking site, after a complaint was received. The incident triggered a debate on the freedom of expression on social networking websites, with netizens denouncing the extreme measure taken by the police. Note that even these people are fully aware of the consequences of unrestrained speech, even over the internet; this is regularly reflected in the evidence gathered in most terror cases, where a plethora of evidence is in the form of soft copies, some of it attributed to the internet as the source. Hence even the most enthusiastic person hoping to bring a revolution through blogs and comments is careful enough to say things that stay reasonably within constitutional bounds. This psychological way of extending control over the internet, by keeping a tab on the real-world originator of the information rather than on the bits of data flowing over the network, proclaims that one can put any information online but cannot easily get away with it. This method is used to extend control internally within a nation's borders, and is consciously or unconsciously followed by citizens as an acceptable norm of regular behaviour.

Access to sites inside a country goes through routers physically present within the nation's borders. These become the control points for access to websites based on the nation's policies. Hence we see the internet divided among nations, just like the globe divided by lines on a map. As a person needs appropriate documents to cross borders, bits and bytes, and all websites, need proper compliance with a nation's policies to cross over these routers and enter its borders, creating a sense of a bordered internet.

The bordered internet has many virtues along with all its vices. When a person is searching for information over the internet, it is genuinely helpful to know the user's location in order to provide the right information. Say someone is looking for a pizza outlet: by determining the user's location, a search for "pizza outlets" can give more locally relevant information and filter out results that are logically correct but practically useless. Similarly, advertisements can be customized based on the user's location, determined from the router the page requests are coming from. Localization support can also be provided: for example, if a person connects from Karnataka, a website can render itself in Kannada by default. All these are benefits of drawing borders on the visibly borderless internet.

Government intervention in the internet also has positive effects, seen in many forms, one of which is the support provided to online trading companies. During its initial years eBay faced several legal issues due to defaulting customers who used the company's direct customer-to-customer business model to sell fraudulent objects; one person even sold his own soul on eBay. Many similar e-commerce websites use the national legal backbone to further their business over the internet, where the customer is not physically present and settlement of the deal involves third parties and sometimes even payment after delivery. The founder of eBay notes that most people are good; for the few fraudulent ones, the whole business needs regulating with appropriate laws.

The internet is thus not a completely free and open space, but is definitely controlled in ways that are not so obvious. This control cannot be concluded to be either positive or negative in its effects and results; it is exactly like the government of the land: a necessary evil. With this view in mind, the information over the internet makes more sense, both in its structure and in what is shown, or rather highlighted, to a particular user.

Published Article Reference: http://thecompanion.in/who-controls-the-internet/