Divya's Blog

Archive for the ‘technology’ Category


Application Management in relationship to busi...

IT-Business management systems

  • Self-managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives. They can perform management activities based on situations they observe or sense in the IT environment.
  • Self-managing computing is the self-management of e-business infrastructure, balancing what is managed by the IT professional and what is managed by the system. It is the evolution of e-business.

Systems with self-managing components reduce the cost of owning and operating computer systems. Rather than IT professionals initiating management activities, the system observes something about itself and acts accordingly. This allows the IT professional to focus on high-value tasks while the technology manages the more mundane operations.
IT infrastructure components take on the following characteristics:
• self-configuring
• self-healing
• self-optimizing
• self-protecting

1. Self-configuring: Systems adapt automatically to dynamically changing environments. When hardware and software systems have the ability to define themselves “on the fly,” they are self-configuring. This aspect of self-managing means that new features, software, and servers can be dynamically added to the enterprise infrastructure with no disruption.

2. Self-healing: Systems discover, diagnose, and react to disruptions. For a system to be self-healing, it must be able to recover from a failed component by first detecting and isolating it, taking it offline, fixing or replacing it, and reintroducing the fixed or replacement component into service without any apparent application disruption. Systems will need to predict problems and take actions to prevent failures from having an impact on applications.

3. Self-optimizing: Systems monitor and tune resources automatically. Self-optimization requires hardware and software systems to efficiently maximize resource utilization to meet end-user needs without human intervention.

4. Self-protecting: Systems anticipate, detect, identify, and protect themselves from attacks from anywhere. Self-protecting systems must have the ability to define and manage user access to all computing resources within the enterprise, to protect against unauthorized resource access, to detect intrusions and report and prevent these activities as they occur, and to provide backup and recovery capabilities that are as secure as the original resource management systems. Systems will need to build on a number of core security technologies already available today. Capabilities must be provided to more easily understand and handle user identities in various contexts, removing the burden from administrators.
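As an illustration, the detect-isolate-fix-reintroduce cycle described under self-healing can be sketched in a few lines of Python. The component model and health probe below are hypothetical stand-ins, not any vendor's API:

```python
class Component:
    """A hypothetical managed component with a simple health probe."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def probe(self):
        # In a real system this would ping the component or check metrics.
        return self.healthy

    def restart(self):
        # Stand-in for fixing or replacing the failed component.
        self.healthy = True

def heal(components):
    """Detect failed components, take them offline, and reintroduce them."""
    actions = []
    for c in components:
        if not c.probe():                      # detect and isolate
            actions.append(f"isolating {c.name}")
            c.restart()                        # fix or replace
            actions.append(f"reintroducing {c.name}")
    return actions

db, web = Component("db"), Component("web")
db.healthy = False
print(heal([db, web]))   # → ['isolating db', 'reintroducing db']
```

Only the failed component is touched; the healthy one keeps serving, which is what "without any apparent application disruption" requires.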

 

 

 



Notepad++

Notepad++ is a free (as in “free speech” and also as in “free beer”) source code editor and Notepad replacement that supports several languages. Running in the MS Windows environment, its use is governed by the GPL license.

Based on the powerful editing component Scintilla, Notepad++ is written in C++ and uses pure Win32 API and STL, which ensures a higher execution speed and smaller program size. By optimizing as many routines as possible without losing user friendliness, Notepad++ tries to reduce the world’s carbon dioxide emissions: when using less CPU power, the PC can throttle down and reduce power consumption, resulting in a greener environment.

Latest version: Notepad++ v5.9.2

Go to the following link to download:

http://notepad-plus-plus.org/release/5.9.2


PhotonQ-The Darth Vader Artificial Intelligenc...

Photon-Q Artificial Intelligence Network

Definition:

“A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result.”

“An agent is a software thing that knows how to do things that you could probably do yourself if you had the time.”

                                                Ted Selker of the IBM Almaden Research Centre

Introduction:

Intelligent software agents are a popular research object these days in such fields as psychology, sociology and computer science. Agents are most intensely studied in the discipline of Artificial Intelligence (AI).

Recent Applications:

The current applications of agents are of a rather experimental and ad hoc nature. Besides universities and research centres, a considerable number of companies, like IBM and Microsoft, are doing research in the area of agents. To make sure their research projects will receive further financing, many researchers and developers at such companies (but this also applies to other parties, even non-commercial ones) are nowadays focusing on rather basic agent applications, as these lead to demonstrable results within a definite time.

Examples of this kind of agent applications are:

  • Agents that partially or fully handle someone’s e-mail.
  • Agents that filter and/or search through (Usenet) news articles looking for information that may be interesting to a user.
  • Agents that make arrangements for gatherings such as a meeting, for instance by means of lists provided by the persons attending or based on the information (appointments) in the electronic agenda of every single participant.
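The first kind of agent above, a mail handler, can be sketched as a simple rule-based classifier. The rules, keywords and addresses below are made up for illustration:

```python
def classify(message):
    """Route a message dict to a folder based on simple keyword rules."""
    subject = message["subject"].lower()
    # Crude junk rule: well-known spam phrases in the subject line.
    if any(w in subject for w in ("win money", "lottery", "prize")):
        return "junk"
    # Assumed trusted domain gets priority handling.
    if message["sender"].endswith("@example.com"):
        return "priority"
    return "inbox"

mail = [
    {"sender": "boss@example.com", "subject": "Quarterly report"},
    {"sender": "spam@spam.net", "subject": "You WIN MONEY now"},
]
print([classify(m) for m in mail])   # → ['priority', 'junk']
```

A real mail agent would learn such rules from the user's behaviour rather than hard-coding them, but the observe-and-act loop is the same.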

Future Areas of Application:

  1. Systems and Network Management
  2. Mobile Access / Management
  3. Mail and Messaging
  4. Information Access and Management
  5. Collaboration
  6. Workflow and Administrative Management
  7. Electronic Commerce
  8. Adaptive User Interfaces
Search Engine Application:
  • Agents are capable of searching for information more intelligently, for instance because tools (such as a thesaurus) enable them to search on related terms as well, or even on concepts. Agents will also use these tools to fine-tune, or even correct, user queries (on the basis of a user model, or other user information).
  • Individual user agents can create their own knowledge base about available information sources on the Internet, which is updated and expanded after every search. When information (i.e. documents) has moved to another location, agents will be able to find it and update their knowledge base accordingly.
  • Furthermore, in the future agents will be able to communicate and co-operate with other agents. This will enable them to perform tasks, such as information searches, more quickly and efficiently, reducing network traffic. They will also be able to perform tasks (e.g. searches) directly at the source/service, leading to a further decrease in network traffic.
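The thesaurus-based query expansion mentioned in the first point can be sketched as follows. The tiny dictionary here is a stand-in for a real lexical resource:

```python
# Toy thesaurus mapping a term to related terms (illustrative entries only).
THESAURUS = {
    "car": ["automobile", "vehicle"],
    "fast": ["quick", "rapid"],
}

def expand_query(query):
    """Return the original query terms plus related terms from the thesaurus."""
    terms = []
    for word in query.lower().split():
        terms.append(word)
        terms.extend(THESAURUS.get(word, []))
    return terms

print(expand_query("fast car"))
# → ['fast', 'quick', 'rapid', 'car', 'automobile', 'vehicle']
```

An agent would then search on the expanded term list, matching documents that use "automobile" even though the user typed "car".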

PHANTOM haptic device made by SensAble Technol...

HAPTIC INTERACTION

INTRODUCTION:

A haptic interface is a force-reflecting device which allows a user to touch, feel, manipulate, create and/or alter simulated 3D objects in a virtual environment. The word haptic (adjective, from the Greek haptein) means having to do with the sense of touch; tactile. Haptics covers touch, force feedback, texture, heat and vibration.

PRINCIPLE BEHIND IT:

Force display technology works by using mechanical actuators to apply forces to the user. By simulating the physics of the user’s virtual world, we can compute these forces in real-time, and then send them to the actuators so that the user feels them.
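A common way to compute such forces is the penalty method: when the haptic cursor penetrates a virtual surface, a restoring force proportional to the penetration depth (Hooke's law, F = kx) is sent to the actuators. A minimal sketch, with an assumed surface stiffness:

```python
def contact_force(cursor_depth_m, stiffness_n_per_m=800.0):
    """Penalty-method force: push back proportionally to penetration depth.

    A positive depth means the cursor is inside the virtual surface; the
    stiffness value is an illustrative assumption, not a device constant.
    """
    if cursor_depth_m <= 0.0:                    # not touching the surface
        return 0.0
    return stiffness_n_per_m * cursor_depth_m    # F = k * x, along the normal

# Each servo cycle (typically around 1 kHz) the simulation reads the cursor
# position, computes the force, and sends it to the actuators.
print(contact_force(0.002))    # 2 mm penetration → 1.6 N
print(contact_force(-0.01))    # above the surface → 0.0 N
```

Stiffer virtual materials use a larger k; textures and friction are layered on top of this basic contact force.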

POTENTIAL BENEFITS:

  • Reduction in fatigue
  • Increase in productivity and comfort
  • Decreased learning times
  • Large reductions in manipulation errors

PRODUCTS FROM HAPTIC INTERACTION

Device turns computer into means for touching virtual objects

Like a high-tech version of a child’s interactive “touch and feel” book, this computer interface lets users feel a virtual object to learn its shape, and whether it’s rough as sandpaper, hard as rock or hot as fire. What good is that? Plenty, if you’re in the business of creating new ways for humans and computers to interact. One of the stickiest problems in developing advanced human-computer interfaces is finding a way to simulate touch. Without it, virtual reality isn’t very real.

Now a group of researchers at MIT’s Artificial Intelligence Laboratory has found a way to communicate the tactile sense of a virtual object (its shape, texture, temperature, weight and rigidity) and let the user change those characteristics through a device called the PHANToM haptic interface.

 

iSCSI

Posted on: June 4, 2011


SCSI terminator with top cover removed.


SCSI

  • SCSI (Small Computer Systems Interface) has been a standard client-server protocol for decades; it is used to enable computers to communicate with storage devices. As system interconnects move from the classical bus structure to a network structure, SCSI has to be mapped to network transport protocols.
  • Today’s Gigabit IP networks meet the performance requirements needed to seamlessly transport SCSI commands between application servers and centralized storage.

iSCSI

  • The iSCSI protocol enables the transfer of SCSI packets over a TCP/IP (Ethernet) network. iSCSI is an interoperable solution which enables the use of existing TCP/IP infrastructure and addresses distance limitations (iSCSI can also be used over the Internet).
  • This means the disk drives in your SAN are presented over your existing Ethernet network to server applications as though the disks are local to your physical server hardware.

Enabling Server Virtualization with Shared Storage - IT Project

Implementing a reliable iSCSI SAN is becoming an industry best practice for administrators of VMware ESX and ESXi, Microsoft Hyper-V and Citrix XenServer. Storage virtualization is the logical next step to complete a virtual environment. Combining iSCSI SANs and system virtualization allows IT organizations to improve overall levels of performance, lower overall costs and increase the levels of system and data protection.

By installing an iSCSI SAN, system administrators can provide powerful, shared storage for their VMs featuring:

  • CDP & Snapshots. By combining Continuous Data Protection (CDP) with instant point-in-time snapshot technology, StarWind allows you to recover from a failure in moments. An administrator can access snapshot copies to recover individual files or folders from a volume, or roll back an entire volume to a point prior to failure.
  • Mirroring & Replication. StarWind’s Asynchronous Replication and Synchronous Mirroring enable advanced replication between a primary SAN and a remote site. They are used for centralized backup and disaster recovery and can be set up on a per-volume basis. Remote copies placed on a recurring schedule allow a customer to achieve point-in-time replication of the data between sites located in different server rooms, satellite locations, buildings across campus or even metro areas.
  • Thin Provisioning. StarWind’s Thin Provisioning feature offers very efficient disk utilization. Administrators no longer need to predict how much space they will need or purchase disks ahead of time. For maximum SAN efficiency, StarWind Thin Provisioning allocates only as much space as is required for the data being written to a volume.

StarWind

  • StarWind is iSCSI SAN software that turns any industry-standard server into a highly reliable and highly available enterprise-class SAN or centralized iSCSI storage.
  • StarWind creates a more efficient virtual environment. Its iSCSI SAN solution is highly affordable, helping you slash operating costs, and highly reliable, helping you protect your entire virtual data center.

Image representing VeriSign as depicted in Cru...

VeriSign Identity Protection

  • VeriSign Identity Protection (VIP) Authentication Service helps companies mitigate risk and maintain compliance with a scalable, reliable two-factor authentication platform delivered without the high cost of infrastructure and operations.
  • With VIP Authentication Service, the end user experiences a fast response and the assurance that their identity is protected by an added layer of security.

A Scalable, Reliable Platform

Our flexible platform is highly available, scalable and reliable, leveraging VeriSign’s expertise in running on-demand, critical Internet infrastructure globally. With VIP, the end user’s identity information stays within your enterprise; only the security code and credential ID pass anonymously to VeriSign for validation.

A Convenient Choice of Credentials

  • VIP Authentication Service supports a range of OATH-compliant credential form factors to meet the diverse needs of end users. Enterprise customers who use VIP have immediate access to the most convenient and cost effective form factors available for employees, business partners and customers.
  • Freely available credentials for mobile handsets and PC desktops dramatically reduce the total cost of ownership for typical Two-Factor Authentication solutions. VeriSign also offers the most deployed and innovative hardware credentials including tokens and credit card-sized credentials.

Preferred for the Enterprise

End Users may use their VIP credential on any participating Web site that displays the VeriSign Identity Protection logo. VIP Network Members include eBay, PayPal, AOL and more.

VeriSign® Identity Protection (VIP) Access for Mobile turns a mobile phone into a two-factor authentication security device

VIP service

How It Works

  • Most enterprise networks and externally facing Web sites require a username and password to identify you online. But usernames and passwords can be cracked, hacked and faked. VIP Access for Mobile verifies your identity by generating a unique security code, or one-time password, each time you use it.
  • Use VIP Access for Mobile to protect your identity, financial assets, and privacy when you sign in to your enterprise or to leading Web sites like PayPal, eBay, AOL, and other Web sites displaying the VIP Network Member logo.
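The security codes behind OATH-compliant credentials such as VIP's are typically produced by the HOTP algorithm (RFC 4226): an HMAC-SHA1 over a moving counter, dynamically truncated to a short decimal code. A minimal sketch using only Python's standard library; the secret below is the RFC's published test key, not a real credential:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """OATH HOTP (RFC 4226): HMAC-SHA1 of a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: key "12345678901234567890", counter 0 → "755224"
print(hotp(b"12345678901234567890", 0))
```

The server performs the same computation and accepts the code only if it matches, so the password is valid for a single use; time-based variants (TOTP) replace the counter with the current time step.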

This is a photograph of the control room in th...

NASA

INTRODUCTION:
Rainfinity‘s technology originated in a research project at the California Institute of Technology (Caltech), in collaboration with NASA’s Jet Propulsion Laboratory and the Defense Advanced Research Projects Agency (DARPA).

  • The name of the original research project was RAIN, which stands for Reliable Array of Independent Nodes.
  • The goal of the RAIN project was to identify key software building blocks for creating reliable distributed applications using off-the-shelf hardware.
  • The focus of the research was on high-performance, fault-tolerant and portable clustering technology for space-borne computing.

Two important assumptions were made, and these two assumptions reflect the differentiations between RAIN and a number of existing solutions both in the industry and in academia:

  • The most general share-nothing model is assumed. There is no shared storage accessible from all computing nodes; the only way for the computing nodes to share state is to communicate via a network. This differentiates RAIN technology from existing back-end server clustering solutions such as SUNcluster, HP MC ServiceGuard or Microsoft Cluster Server.
  • The distributed application is not an isolated system. The distributed protocols interact closely with existing networking protocols so that a RAIN cluster is able to interact with the environment. Specifically, technological modules were created to handle high-volume network-based transactions. This differentiates it from traditional distributed computing projects such as Beowulf.
  • It became obvious that RAIN technology was well-suited for Internet applications. During the RAIN project, key components were built to fulfill this vision.
  • A patent was filed and granted for the RAIN technology.
  • Rainfinity was spun off from Caltech in 1998, and the company has exclusive intellectual property rights to the RAIN technology.
  • After the formation of the company, the RAIN technology has been further augmented, and additional patents have been filed.

The guiding concepts that shaped the architecture are as follows:

1. Network Applications: The architecture goals for clustering data network applications are different from those for clustering data storage applications. Similar goals apply in the telecom environment that provides the Internet backbone infrastructure, due to the nature of the applications and services being clustered.

2. Shared-Nothing: The shared-storage cluster is the most widely used for database and application servers that store persistent data on disks. This type of cluster typically focuses on the availability of the database or application service, rather than performance. Recovery from failover is generally slow, because restoring application access to disk-based data takes minutes or longer, not seconds. Telecom servers deployed at the edge of the network are often diskless, keeping data in memory for performance reasons, and can tolerate only short failover times. Therefore, a new type of shared-nothing cluster with rapid failure detection and recovery is required. The only way for the shared-nothing cluster to share state is to communicate via the network.

3. Scalability: While the high-availability cluster focuses on recovery from unplanned and planned downtimes, this new type of cluster must also be able to maximize I/O performance by load balancing across multiple computing nodes. Linear scalability with network throughput is important. In order to maximize the total throughput, load-balancing decisions must be made dynamically by measuring the current capacity of each computing node in real time. Static hashing does not guarantee an even distribution of traffic.
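The dynamic load-balancing decision described above can be sketched as picking the node with the most spare measured capacity, in contrast to static hashing. The capacity and load figures here are illustrative:

```python
def pick_node(capacity, load):
    """Return the index of the node with the largest free capacity.

    capacity and load are per-node measurements (e.g. requests/s);
    in a real cluster they would be sampled continuously.
    """
    free = [c - l for c, l in zip(capacity, load)]
    return max(range(len(free)), key=free.__getitem__)

capacity = [100, 100, 50]    # sustainable requests/s per node
load     = [80, 30, 20]      # currently measured load per node
print(pick_node(capacity, load))   # node 1 has 70 requests/s free → 1
```

Because the choice is based on live measurements, a slow or busy node automatically receives less new traffic, which static hashing cannot achieve.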

4. Peer-to-Peer: A dispatcher-based, master-slave cluster architecture suffers from limited scalability, since the dispatcher is a potential bottleneck. A peer-to-peer cluster architecture is more suitable for latency-sensitive data network applications processing short-lived sessions. A hybrid architecture should be considered to offset the need for more control over resource management. For example, a cluster can assign multiple authoritative computing nodes that process traffic in round-robin order for each clustered network interface, to reduce the overhead of traffic forwarding.

 



		
