Divya's Blog



Photon-Q Artificial Intelligence Network

Definition:

“A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result.”

“An agent is a software thing that knows how to do things that you could probably do yourself if you had the time.”

                                                Ted Selker of the IBM Almaden Research Centre

Introduction:

Intelligent software agents are a popular research topic these days in fields such as psychology, sociology and computer science. Agents are most intensively studied in the discipline of Artificial Intelligence (AI).

Recent Applications:

The current applications of agents are of a rather experimental and ad hoc nature. Besides universities and research centres, a considerable number of companies, such as IBM and Microsoft, are doing research in the area of agents. To make sure their research projects receive further financing, many researchers and developers at such companies (but this applies to other parties too, even non-commercial ones) are currently focusing on rather basic agent applications, as these lead to demonstrable results within a definite time.

Examples of this kind of agent application are:

  • Agents that partially or fully handle someone’s e-mail.
  • Agents that filter and/or search through (Usenet) news articles, looking for information that may interest the user.
  • Agents that make arrangements for gatherings such as meetings, for instance by means of lists provided by the persons attending, or based on the appointments in the electronic agenda of every participant.
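A scheduling agent of the last kind essentially intersects the free slots of every participant's agenda. A minimal sketch of that idea (the slot representation and participant data below are hypothetical, not any particular agent's format):

```python
# Minimal meeting-scheduling sketch: intersect the free time slots of
# all participants. The (day, hour) slot representation is made up.

def common_slots(agendas):
    """Each agenda is a set of free slots, e.g. ("Mon", 10) = Monday 10:00."""
    return sorted(set.intersection(*agendas))

alice = {("Mon", 10), ("Mon", 11), ("Tue", 14)}
bob   = {("Mon", 11), ("Tue", 14), ("Wed", 9)}
carol = {("Mon", 11), ("Wed", 9)}

print(common_slots([alice, bob, carol]))  # [('Mon', 11)]
```

A real agent would of course pull these slots from each participant's electronic agenda rather than from hand-written sets.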

Future Areas of Application:

  1. Systems and Network Management
  2. Mobile Access / Management
  3. Mail and Messaging
  4. Information Access and Management
  5. Collaboration
  6. Workflow and Administrative Management
  7. Electronic Commerce
  8. Adaptive User Interfaces

Search Engine Application:
  • Agents are capable of searching for information more intelligently, for instance because tools (such as a thesaurus) enable them to search on related terms as well, or even on concepts. Agents will also use these tools to fine-tune, or even correct, user queries (on the basis of a user model or other user information).
  • Individual user agents can create their own knowledge base about available information sources on the Internet, which is updated and expanded after every search. When information (i.e. documents) has moved to another location, agents will be able to find it and update their knowledge base accordingly.
  • Furthermore, in the future agents will be able to communicate and co-operate with other agents. This will enable them to perform tasks, such as information searches, more quickly and efficiently, reducing network traffic. They will also be able to perform tasks (e.g. searches) directly at the source/service, leading to a further decrease in network traffic.
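The thesaurus-based fine-tuning of user queries can be sketched as simple query expansion: before searching, the agent widens each query term with related terms. The thesaurus entries below are hypothetical:

```python
# Sketch of thesaurus-based query expansion: the agent widens a user
# query with related terms before searching. The thesaurus is made up.

THESAURUS = {
    "car": ["automobile", "vehicle"],
    "cheap": ["inexpensive", "affordable"],
}

def expand_query(query):
    expanded = []
    for term in query.lower().split():
        expanded.append(term)
        expanded.extend(THESAURUS.get(term, []))  # add related terms, if any
    return expanded

print(expand_query("cheap car"))
# ['cheap', 'inexpensive', 'affordable', 'car', 'automobile', 'vehicle']
```

A production agent would draw on a real thesaurus or a user model rather than a hard-coded table, but the principle is the same.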

PHANToM haptic device made by SensAble Technologies

HAPTIC INTERACTION

INTRODUCTION:

“A haptic interface is a force-reflecting device which allows a user to touch, feel, manipulate, create and/or alter simulated 3D objects in a virtual environment.” Haptic (adjective, from Greek haptein): having to do with the sense of touch; tactile. Haptics covers touch, tactile feedback and force feedback, using force/resistance, texture, heat and vibration.

PRINCIPLE BEHIND IT:

Force display technology works by using mechanical actuators to apply forces to the user. By simulating the physics of the user’s virtual world, we can compute these forces in real-time, and then send them to the actuators so that the user feels them.
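A classic way to compute these forces is the penalty method: when the device tip penetrates a virtual surface, a spring force proportional to the penetration depth (Hooke's law) is sent to the actuators. A minimal one-dimensional sketch (the stiffness value is illustrative):

```python
# Penalty-based haptic rendering sketch (1-D): when the device tip
# penetrates the virtual wall at x = 0, push back with a spring force
# F = -k * penetration (Hooke's law). The stiffness k is a made-up value.

K = 500.0  # spring stiffness in N/m (illustrative)

def wall_force(x):
    """x is the tip position in metres; the wall occupies x < 0."""
    if x < 0:            # tip is inside the wall
        return -K * x    # positive force pushes the tip back out
    return 0.0           # free space: no force

# Each servo cycle (typically around 1 kHz) the device position is read
# and the computed force is sent to the actuators:
for x in (0.010, 0.000, -0.002):
    print(f"x = {x:+.3f} m -> F = {wall_force(x):.1f} N")
```

Real haptic rendering adds damping, friction and 3-D surface models, but this spring term is the core of the force computation loop.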

POTENTIAL BENEFITS:

  • Reduction in fatigue
  • Increase in productivity and comfort
  • Decreased learning times
  • Large reductions in manipulation errors.

PRODUCTS FROM HAPTIC INTERACTION

Device turns computer into means for touching virtual objects

Like a high-tech version of a child’s interactive “touch and feel” book, this computer interface lets users feel a virtual object to learn its shape, and whether it’s rough as sandpaper, hard as rock or hot as fire. For anyone in the business of creating new ways for humans and computers to interact, this matters: one of the stickiest problems in developing advanced human-computer interfaces is finding a way to simulate touch. Without it, virtual reality isn’t very real.

Now a group of researchers at MIT’s Artificial Intelligence Laboratory has found a way to communicate the tactile sense of a virtual object (its shape, texture, temperature, weight and rigidity) and to let the user change those characteristics, through a device called the PHANToM haptic interface.

 

iSCSI

Posted on: June 4, 2011


SCSI terminator with top cover removed.


SCSI

  • SCSI (Small Computer Systems Interface) has been a standard client-server protocol for decades; it is used to enable computers to communicate with storage devices. As system interconnects move from the classical bus structure to a network structure, SCSI has to be mapped to network transport protocols.
  • Today’s Gigabit IP networks meet the performance requirements needed to seamlessly transport SCSI commands between application servers and centralized storage.

iSCSI

  • The iSCSI protocol enables the transfer of SCSI packets over a TCP/IP (Ethernet) network. iSCSI is an interoperable solution which enables the use of existing TCP/IP infrastructure and addresses distance limitations (iSCSI can also be used over the Internet).
  • This means the disk drives in your SAN are presented over your existing Ethernet network to server applications as though the disks are local to your physical server hardware.
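On Linux, the open-iscsi initiator illustrates how this looks in practice: a target on the Ethernet network is discovered and logged into, after which its LUNs appear as local block devices. The portal address below is hypothetical:

```shell
# Discover iSCSI targets offered by a storage portal (address is illustrative)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

# Log in to the discovered target; its LUNs then appear as local disks
iscsiadm -m node --login

# Verify: the new block device shows up alongside the local drives
lsblk
```

From this point on, the operating system treats the remote SAN volume exactly like a locally attached disk.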

Enabling Server Virtualization with Shared Storage -IT Project

Implementing a reliable iSCSI SAN is becoming an industry best practice for administrators of VMware ESX and ESXi, Microsoft Hyper-V and Citrix XenServer. Storage virtualization is the logical next step to complete a virtual environment. Combining iSCSI SANs and system virtualization allows IT organizations to improve overall levels of performance, lower overall costs and increase the levels of system and data protection.

By installing an iSCSI SAN, system administrators can provide powerful, shared storage for their VMs featuring:

  • CDP & Snapshots. By combining Continuous Data Protection (CDP) with instant point-in-time snapshot technology, StarWind allows you to recover from a failure in moments. An administrator can access snapshot copies to recover individual files or folders from a volume, or roll back an entire volume to a point prior to the failure.
  • Mirroring & Replication. StarWind’s Asynchronous Replication and Synchronous Mirroring enable advanced replication between a primary SAN and a remote site. They are used for centralized backup and disaster recovery, and can be set up on a per-volume basis. Remote copies placed on a recurring schedule allow a customer to achieve point-in-time replication of data between sites located in different server rooms, satellite locations, buildings across campus or even metro areas.
  • Thin Provisioning. StarWind’s Thin Provisioning feature offers very efficient disk utilization. Administrators no longer need to predict how much space they will need or purchase disks ahead of time. For maximum SAN efficiency, Thin Provisioning allocates only as much space as is required for the data being written to a volume.
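Thin provisioning in general boils down to allocate-on-write: the volume advertises its full logical size, but backing blocks are claimed only when data is first written. A simplified sketch of the idea (the block size and data structures are illustrative, not StarWind's implementation):

```python
# Thin-provisioning sketch: a volume reports its full logical size but
# allocates backing blocks only when they are actually written.

BLOCK_SIZE = 4096  # bytes per block (illustrative)

class ThinVolume:
    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks
        self.blocks = {}  # block number -> data; absent means unallocated

    def write(self, block_no, data):
        self.blocks[block_no] = data  # backing space is claimed on first write

    def read(self, block_no):
        # Unwritten blocks read back as zeros, as with a real sparse volume
        return self.blocks.get(block_no, b"\x00" * BLOCK_SIZE)

    def allocated_bytes(self):
        return len(self.blocks) * BLOCK_SIZE

vol = ThinVolume(logical_blocks=1_000_000)   # ~4 GB of logical space
vol.write(7, b"hello".ljust(BLOCK_SIZE, b"\x00"))
print(vol.allocated_bytes())  # 4096: only the written block consumes space
```

The volume above claims to hold a million blocks, yet consumes physical space only for the single block that was written, which is the efficiency gain thin provisioning delivers.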

StarWind

  • StarWind is iSCSI SAN software that turns any industry-standard server into a highly reliable and highly available enterprise-class SAN or centralized iSCSI storage.
  • StarWind creates a more efficient virtual environment. Its iSCSI SAN solution is highly affordable, helping you slash operating costs, and highly reliable, helping you protect your entire virtual data center.


VeriSign Identity Protection

  • VeriSign Identity Protection (VIP) Authentication Service helps companies mitigate risk and maintain compliance with a scalable, reliable two-factor authentication platform delivered without the high cost of infrastructure and operations.
  • With VIP Authentication Service, the end user experiences a fast response and the assurance that their identity is protected by an added layer of security.

A Scalable, Reliable Platform

Our flexible platform is highly available, scalable and reliable, leveraging VeriSign’s expertise in running on-demand, critical Internet infrastructure globally. With VIP, the end user’s identity information stays within your enterprise; only the security code and credential ID pass anonymously to VeriSign for validation.

A Convenient Choice of Credentials

  • VIP Authentication Service supports a range of OATH-compliant credential form factors to meet the diverse needs of end users. Enterprise customers who use VIP have immediate access to the most convenient and cost effective form factors available for employees, business partners and customers.
  • Freely available credentials for mobile handsets and PC desktops dramatically reduce the total cost of ownership for typical Two-Factor Authentication solutions. VeriSign also offers the most deployed and innovative hardware credentials including tokens and credit card-sized credentials.

Preferred for the Enterprise

End Users may use their VIP credential on any participating Web site that displays the VeriSign Identity Protection logo. VIP Network Members include eBay, PayPal, AOL and more.

VeriSign® Identity Protection (VIP) Access for Mobile turns a mobile phone into a two-factor authentication security device

VIP service

How It Works

  • Most enterprise networks and externally facing Web sites require a username and password to identify you online. But usernames and passwords can be cracked, hacked and faked. Your VIP Access for Mobile verifies your identity by generating a unique security code or one-time password each time you use it.
  • Use your VIP Access for Mobile to protect your identity, financial assets and privacy when you sign in to your enterprise or to leading Web sites like PayPal, eBay, AOL, and other Web sites displaying the VIP Network Member logo.
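OATH-compliant credentials of this kind generate their one-time passwords with the HOTP algorithm standardized in RFC 4226: an HMAC-SHA-1 over a shared secret and a moving counter, dynamically truncated to a short decimal code. A minimal sketch, verified against the RFC's published test vectors:

```python
# OATH HOTP (RFC 4226) sketch: the algorithm behind OATH-compliant
# one-time-password credentials. HMAC-SHA-1 of a moving counter,
# dynamically truncated to a 6-digit code.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # 755224
```

Only the resulting code (plus a credential ID) needs to travel to the validation service; the shared secret never leaves the credential, which is what makes the scheme privacy-friendly.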


NASA

INTRODUCTION:
Rainfinity’s technology originated in a research project at the California Institute of Technology (Caltech), in collaboration with NASA’s Jet Propulsion Laboratory and the Defense Advanced Research Projects Agency (DARPA).

  • The name of the original research project was RAIN, which stands for Reliable Array of Independent Nodes.
  • The goal of the RAIN project was to identify key software building blocks for creating reliable distributed applications using off-the-shelf hardware.
  • The focus of the research was on high-performance, fault-tolerant and portable clustering technology for space-borne computing.

Two important assumptions were made, and these reflect the differences between RAIN and a number of existing solutions, both in industry and in academia:

  • The most general share-nothing model is assumed.
      • There is no shared storage accessible from all computing nodes; the only way for the computing nodes to share state is to communicate via a network.
      • This differentiates RAIN technology from existing back-end server clustering solutions such as SUNcluster, HP MC ServiceGuard or Microsoft Cluster Server.
  • The distributed application is not an isolated system.
      • The distributed protocols interact closely with existing networking protocols, so that a RAIN cluster is able to interact with its environment.
      • Specifically, technological modules were created to handle high-volume network-based transactions. This differentiates RAIN from traditional distributed computing projects such as Beowulf.
  • It became obvious that RAIN technology was well-suited for Internet applications. During the RAIN project, key components were built to fulfill this vision.
  • A patent was filed and granted for the RAIN technology.
  • Rainfinity was spun off from Caltech in 1998, and the company has exclusive intellectual property rights to the RAIN technology.
  • After the formation of the company, the RAIN technology has been further augmented, and additional patents have been filed.

The guiding concepts that shaped the architecture are as follows:
1. Network Applications: The architectural goals for clustering data network applications are different from those for clustering data storage applications. Similar goals apply in the telecom environment that provides the Internet backbone infrastructure, due to the nature of the applications and services being clustered.

2. Shared-Nothing: The shared-storage cluster is the most widely used for database and application servers that store persistent data on disks. This type of cluster typically focuses on the availability of the database or application service rather than on performance. Recovery from failover is generally slow, because restoring application access to disk-based data takes minutes or longer, not seconds. Telecom servers deployed at the edge of the network are often diskless, keeping data in memory for performance reasons, and can tolerate only a low failover time. Therefore, a new type of share-nothing cluster with rapid failure detection and recovery is required. The only way for the nodes of a share-nothing cluster to share state is to communicate via the network.
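Rapid failure detection in a share-nothing cluster is typically heartbeat-based: each node periodically announces itself over the network, and a peer that stays silent for a few intervals is declared failed within seconds rather than minutes. A simplified sketch (the timing constants are illustrative, not RAIN's actual parameters):

```python
# Heartbeat-based failure detection sketch for a share-nothing cluster:
# a node is declared failed once no heartbeat has arrived for a few
# intervals. The timing constants below are illustrative.
import time

HEARTBEAT_INTERVAL = 0.5   # seconds between heartbeat announcements
FAILURE_THRESHOLD = 3      # missed intervals before declaring failure

class FailureDetector:
    def __init__(self):
        self.last_seen = {}  # node id -> timestamp of last heartbeat

    def heartbeat(self, node, now=None):
        self.last_seen[node] = time.monotonic() if now is None else now

    def failed_nodes(self, now=None):
        now = time.monotonic() if now is None else now
        deadline = HEARTBEAT_INTERVAL * FAILURE_THRESHOLD
        return [n for n, t in self.last_seen.items() if now - t > deadline]

det = FailureDetector()
det.heartbeat("node-a", now=0.0)
det.heartbeat("node-b", now=1.4)
print(det.failed_nodes(now=2.0))  # ['node-a']: silent for more than 1.5 s
```

Because detection depends only on network messages, the scheme needs no shared disk, which is exactly the share-nothing property described above.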

3. Scalability: While the high-availability cluster focuses on recovery from unplanned and planned downtime, this new type of cluster must also be able to maximize I/O performance by load balancing across multiple computing nodes. Linear scalability with network throughput is important. In order to maximize total throughput, load-balancing decisions must be made dynamically, by measuring the current capacity of each computing node in real time. Static hashing does not guarantee an even distribution of traffic.
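The contrast between static hashing and dynamic, measurement-based balancing can be sketched as follows (the load figures are illustrative):

```python
# Dynamic load-balancing sketch: route each request to the node with
# the most spare capacity, measured in real time, instead of assigning
# it by a static hash. The utilisation figures are illustrative.

def pick_node(loads):
    """loads: node id -> current utilisation in [0, 1]. Least-loaded wins."""
    return min(loads, key=loads.get)

loads = {"node-1": 0.82, "node-2": 0.35, "node-3": 0.64}
print(pick_node(loads))  # node-2

# A static hash, by contrast, ignores load entirely and can keep
# sending traffic to an already saturated node:
def static_pick(request_id, nodes):
    return nodes[hash(request_id) % len(nodes)]
```

The dynamic picker adapts as utilisation figures change between requests, which is what makes an even traffic distribution, and hence linear scalability, achievable.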

4. Peer-to-Peer: A dispatcher-based, master-slave cluster architecture suffers from limited scalability, because the dispatcher is a potential bottleneck. A peer-to-peer cluster architecture is more suitable for latency-sensitive data network applications processing short-lived sessions. A hybrid architecture should be considered to offset the need for more control over resource management. For example, a cluster can assign multiple authoritative computing nodes that process traffic in round-robin order for each clustered network interface, to reduce the overhead of traffic forwarding.

 



		


  • Nefsis is an online video conferencing service designed for virtual workspaces and business-to-business online meetings.
  • Nefsis is easy to use, takes advantage of the PCs and networks you already have, and represents a realistic, immediately available option to reduce your carbon footprint starting right now.
  • Nefsis dramatically simplifies what used to be a complicated process: video conferencing. Today, even an ordinary webcam on any Pentium PC produces video quality on par with boardroom equipment sold a few years ago.
  • A Logitech 9000 plug-and-play USB webcam produces stunning, full-screen video using nominal bandwidth.
  • The Nefsis server cloud can easily connect multiple employees and business users in the same meeting, even if they are in separate offices (behind separate firewalls and proxies).

Note: No hardware or software installation is needed for the Nefsis free trial.

Green Technology & Realistic Carbon Footprint Reduction

  • On-demand video conferencing can reach virtually any business desktop worldwide. It is one of the few realistic green technologies available to businesses of all sizes, and it’s available now.
  • For example, by using Nefsis you and all your employees could telecommute one day per week. This alone would mean a huge reduction in fuel burned, as well as cost-of-living relief for employees.
  • Another good example is cutting repeat travel to recurring meetings. This too is a realistic, immediately available greenhouse-gas reduction technique.

MINIMUM SYSTEM REQUIREMENTS for NEFSIS

  1. Windows 2000, XP or later; for premium visual experience, Windows Vista & 7 are preferred
  2. Pentium 4 or newer
  3. 512MB RAM available
  4. Broadband Internet access (DSL, Cable Modem, T1, etc.)
  5. DirectShow 8.1 or later compatible webcam or video input (optional, to send video)
  6. DirectSound compatible audio input (optional, to send voice), and headset or speakers
  7. Microsoft® PowerPoint® 97 or later versions.

No proprietary hardware required.

Automatic CPU & Bandwidth Adjustment

  • Nefsis uses industry-standard web browsers and Internet connections that traverse firewalls and proxies for business users.
  • Nefsis is also bandwidth- and video-peripheral agnostic. It allows you to use any connection type, without special routing requirements, and almost any Windows-compatible video source.
  • The quality and speed of your online meeting experience will be governed by the features you use, your PC and video hardware, and the bandwidth available to you in real time.
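Automatic adjustment of this kind boils down to a feedback loop: measure the bandwidth currently available, then pick a video quality tier that fits within it. A sketch of that loop (the tiers and thresholds below are illustrative, not Nefsis's actual algorithm):

```python
# Sketch of automatic bandwidth adjustment: choose the video quality
# tier that fits the bandwidth measured right now. The tiers and
# thresholds are illustrative, not Nefsis's actual algorithm.

TIERS = [  # (minimum kbit/s required, quality label), best first
    (2000, "full-screen HD"),
    (700, "large window"),
    (250, "small window"),
    (0, "audio only"),
]

def pick_quality(measured_kbps):
    for minimum, label in TIERS:
        if measured_kbps >= minimum:
            return label
    return "audio only"

for bw in (3000, 800, 100):
    print(f"{bw} kbit/s -> {pick_quality(bw)}")
```

Re-running the measurement periodically lets the session degrade gracefully when bandwidth drops and recover when it returns, without any user intervention.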

Nefsis Free Trial
