Divya's Blog

Archive for the ‘technology’ Category


Application Management in relationship to busi...

IT-Business management systems

  • Self-managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives.
  • Self-managing computing is the self-management of e-business infrastructure, balancing what is managed by the IT professional and what is managed by the system. It is the evolution of e-business.

Systems with self-managing components reduce the cost of owning and operating computer systems. Self-managing systems can perform management activities based on situations they observe or sense in the IT environment. Rather than IT professionals initiating management activities, the system observes something about itself and acts accordingly. This allows the IT professional to focus on high-value tasks while the technology manages the more mundane operations.
IT infrastructure components take on the following characteristics:
• self-configuring
• self-healing
• self-optimizing
• self-protecting

1. Self-configuring: Systems adapt automatically to dynamically changing environments. When hardware and software systems have the ability to define themselves “on the fly,” they are self-configuring. This aspect of self-management means that new features, software and servers can be dynamically added to the enterprise infrastructure with no disruption.

2. Self-healing: Systems discover, diagnose and react to disruptions. For a system to be self-healing, it must be able to recover from a failed component by first detecting and isolating it, taking it off line, fixing or replacing it, and reintroducing the fixed or replacement component into service without any apparent application disruption. Systems will also need to predict problems and take action to prevent failures from having an impact on applications. A minimal sketch of this detect-isolate-recover loop appears after item 4.

3. Self-optimizing: Systems monitor and tune resources automatically. Self-optimization requires hardware and software systems to maximize resource utilization efficiently to meet end-user needs without human intervention.

4. Self-protecting: Systems anticipate, detect, identify and protect themselves from attacks from anywhere. Self-protecting systems must have the ability to define and manage user access to all computing resources within the enterprise, to protect against unauthorized resource access, to detect intrusions and report and prevent these activities as they occur, and to provide backup and recovery capabilities that are as secure as the original resource management systems. Systems will need to build on top of a number of core security technologies already available today. Capabilities must be provided to more easily understand and handle user identities in various contexts, removing this burden from administrators.
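
As a loose illustration of the self-healing behaviour described in item 2, the Python sketch below runs a detect-isolate-recover loop: it probes each component, takes a failed one out of service, and reintroduces a replacement while the rest of the system keeps running. The component names and the check_health/restart stand-ins are hypothetical, not part of any real management product.

```python
# Hypothetical sketch of a self-healing control loop: detect a failure,
# isolate the component, recover it, reintroduce it into service.
import random
import time

class Component:
    def __init__(self, name):
        self.name = name
        self.in_service = True

    def check_health(self):
        # Stand-in health probe; a real system would ping the component.
        return random.random() > 0.2

    def restart(self):
        # Stand-in recovery action (restart, redeploy or swap in a spare).
        self.in_service = True

def self_healing_loop(components, interval=1.0, cycles=3):
    for _ in range(cycles):
        for c in components:
            if not c.check_health():
                c.in_service = False          # isolate the failed component
                print(f"{c.name}: failure detected, taking it off line")
                c.restart()                   # fix or replace it
                print(f"{c.name}: reintroduced into service")
        time.sleep(interval)

if __name__ == "__main__":
    self_healing_loop([Component("web-server"), Component("db-server")],
                      interval=0.1)
```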

 

 

 


Notepad (software)

Notepad++

Notepad++ is a free (as in “free speech” and also as in “free beer”) source code editor and Notepad replacement that supports several languages. Running in the MS Windows environment, its use is governed by the GPL license.

Based on the powerful editing component Scintilla, Notepad++ is written in C++ and uses pure Win32 API and STL which ensures a higher execution speed and smaller program size. By optimizing as many routines as possible without losing user friendliness, Notepad++ is trying to reduce the world carbon dioxide emissions. When using less CPU power, the PC can throttle down and reduce power consumption, resulting in a greener environment.

FEATURES:

Latest version: Notepad++ v5.9.2

Go to the following link to download:

http://notepad-plus-plus.org/release/5.9.2


PhotonQ-The Darth Vader Artificial Intelligenc...

Photon-Q Artificial Intelligence Network

Definition:

“A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result.”

“An agent is a software thing that knows how to do things that you could probably do yourself if you had the time.”

                                                Ted Selker of the IBM Almaden Research Centre

Introduction:

Intelligent software agents are a popular research topic these days in fields such as psychology, sociology and computer science. Agents are most intensively studied in the discipline of Artificial Intelligence (AI).

Recent Applications:

The current applications of agents are of a rather experimental and ad hoc nature. Besides universities and research centres, a considerable number of companies, such as IBM and Microsoft, are doing research in the area of agents. To make sure their research projects will receive further financing, many researchers and developers at such companies (and at other parties, even non-commercial ones) are currently focusing on rather basic agent applications, as these lead to demonstrable results within a definite time.

Examples of this kind of agent application are:

  • Agents that partially or fully handle someone’s e-mail (a toy filtering sketch follows this list).
  • Agents that filter and/or search through (Usenet) news articles, looking for information that may be of interest to a user.
  • Agents that make arrangements for gatherings such as meetings, for instance by means of lists provided by the persons attending, or based on the information (appointments) in the electronic agenda of every participant.
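
As a toy illustration of the first example above, the sketch below applies a few user-supplied rules to incoming messages and files each one into a folder. The rules, folder names and message format are all made up for the example; a real mail agent would plug into an actual mail store.

```python
# Toy e-mail handling agent: route messages to folders based on simple
# user-defined rules. Rules, folders and messages are illustrative only.
RULES = [
    {"field": "from", "contains": "newsletter", "folder": "Newsletters"},
    {"field": "subject", "contains": "invoice", "folder": "Finance"},
]

def route(message, rules=RULES, default="Inbox"):
    """Return the folder a message should be filed into."""
    for rule in rules:
        if rule["contains"] in message.get(rule["field"], "").lower():
            return rule["folder"]
    return default

if __name__ == "__main__":
    msg = {"from": "weekly-newsletter@example.com", "subject": "Hello"}
    print(route(msg))   # -> Newsletters
```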

 Future Areas of Application:

  1. Systems and Network Management
  2. Mobile Access / Management
  3.  Mail and Messaging
  4. Information Access and Management
  5. Collaboration
  6. Workflow and Administrative Management
  7. Electronic Commerce
  8.  Adaptive User Interfaces
Search Engine Application:
  • Agents are capable of searching for information more intelligently, for instance because tools (such as a thesaurus) enable them to search on related terms as well, or even on concepts. Agents will also use these tools to fine-tune, or even correct, user queries (on the basis of a user model or other user information); a small query-expansion sketch follows this list.
  • Individual user agents can create their own knowledge base about available information sources on the Internet, which is updated and expanded after every search. When information (i.e. documents) has moved to another location, agents will be able to find it and update their knowledge base accordingly.
  • Furthermore, in the future agents will be able to communicate and co-operate with other agents. This will enable them to perform tasks, such as information searches, more quickly and efficiently, reducing network traffic. They will also be able to perform tasks (e.g. searches) directly at the source/service, leading to a further decrease in network traffic.
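
A minimal sketch of the thesaurus-based refinement mentioned in the first bullet above; the thesaurus entries are invented for the example, and a real agent would draw on a much richer knowledge base and user model.

```python
# Toy query expansion: broaden a user query with related terms drawn from a
# small, hand-made thesaurus. Entries are illustrative only.
THESAURUS = {
    "car": ["automobile", "vehicle"],
    "cheap": ["inexpensive", "affordable"],
}

def expand_query(query):
    """Return the query broadened with related terms, joined with OR."""
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(THESAURUS.get(term, []))
    return " OR ".join(expanded)

if __name__ == "__main__":
    print(expand_query("cheap car"))
    # -> cheap OR car OR inexpensive OR affordable OR automobile OR vehicle
```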

PHANTOM haptic device made by SensAble Technol...

HAPTIC INTERACTION

INTRODUCTION:

“A haptic interface is a force-reflecting device which allows a user to touch, feel, manipulate, create and/or alter simulated 3D objects in a virtual environment.” The word haptic (adjective, from the Greek haptein) means having to do with the sense of touch; tactile. Haptics covers touch, tactile and force feedback, the use of force/resistance, texture, heat and vibration.

PRINCIPLE BEHIND IT:

Force display technology works by using mechanical actuators to apply forces to the user. By simulating the physics of the user’s virtual world, we can compute these forces in real-time, and then send them to the actuators so that the user feels them.
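
As a rough sketch of that computation, a common way to render contact forces is a spring (penalty) model: when the haptic probe penetrates a virtual surface, push back with a force proportional to the penetration depth. The stiffness value and the one-dimensional wall below are illustrative and not tied to any particular device.

```python
# Penalty-based haptic rendering sketch: the deeper the probe penetrates a
# virtual wall, the harder the actuator pushes back. Values are illustrative.
STIFFNESS = 800.0   # N/m, spring constant of the virtual wall
WALL_X = 0.0        # wall surface at x = 0; free space is x > 0

def contact_force(probe_x):
    """Return the 1-D force (N) to send to the actuator for a probe at probe_x (m)."""
    penetration = WALL_X - probe_x
    if penetration <= 0.0:
        return 0.0                     # probe is in free space
    return STIFFNESS * penetration     # push the user back out of the wall

if __name__ == "__main__":
    for x in (0.010, 0.000, -0.002, -0.005):   # probe positions in metres
        print(f"x = {x:+.3f} m -> force = {contact_force(x):.1f} N")
```

In a real device this loop runs at a very high update rate (on the order of a kilohertz) so that the force feels continuous.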

POTENTIAL BENEFITS:

  •    Reduction in fatigue
  •   Increase in productivity and comfort
  •   Decreased learning times
  •  Large reductions in manipulation errors.

PRODUCTS FROM HAPTIC INTERACTION

Device turns computer into means for touching virtual objects

Like a high-tech version of a child’s interactive “touch and feel” book, this computer interface lets users feel a virtual object to learn its shape, and whether it’s rough as sandpaper, hard as rock or hot as fire. So what? Plenty, if you’re in the business of creating new ways for humans and computers to interact. One of the stickiest problems in developing advanced human-computer interfaces is finding a way to simulate touch. Without it, virtual reality isn’t very real.

Now a group of researchers at MIT‘s Artificial Intelligence Laboratory has found a way to communicate the tactile sense of a virtual object — its shape, texture, temperature, weight and rigidity — and to let the user change those characteristics, through a device called the PHANToM haptic interface.

 

iSCSI

Posted on: June 4, 2011


SCSI terminator with top cover removed.

SCSI TERMINATOR

SCSI

  • SCSI (Small Computer System Interface) has been a standard client-server protocol for decades; it is used to enable computers to communicate with storage devices. As system interconnects move from the classical bus structure to a network structure, SCSI has to be mapped to network transport protocols.
  • Today’s Gigabit IP networks meet the performance requirements needed to transport SCSI commands seamlessly between application servers and centralized storage.

iSCSI

  • The iSCSI protocol enables the transfer of SCSI packets over a TCP/IP (Ethernet) network. iSCSI is an interoperable solution which enables the use of existing TCP/IP infrastructure and addresses distance limitations (iSCSI can also be used over the Internet).
  • This means the disk drives in your SAN are presented over your existing Ethernet network to server applications as though the disks are local to your physical server hardware.

Enabling Server Virtualization with Shared Storage - IT Project

Implementing a reliable iSCSI SAN is becoming an industry best practice for administrators of VMware ESX and ESXi, Microsoft Hyper-V and Citrix XenServer. Storage virtualization is the logical next step to complete a virtual environment. Combining iSCSI SANs and system virtualization allows IT organizations to improve overall levels of performance, lower overall costs and increase the levels of system and data protection.

By installing an iSCSI SAN, system administrators can provide powerful, shared storage for their VMs featuring:

  • CDP & Point-in-Time Snapshots. By combining Continuous Data Protection (CDP) with instant snapshot technology, StarWind allows you to recover from a failure in moments. An administrator can access snapshot copies to recover individual files or folders from the volume, or roll back an entire volume to a point prior to the failure.
  • Mirroring & Replication. StarWind’s Asynchronous Replication and Synchronous Mirroring enable advanced replication between a primary SAN and a remote site. They are used for centralized backup and disaster recovery and can be set up on a per-volume basis. Remote copies placed on a recurring schedule allow a customer to achieve point-in-time replication of the data between sites located in different server rooms, satellite locations, buildings across campus or even metro areas.
  • Thin Provisioning. StarWind’s Thin Provisioning feature offers very efficient disk utilization. Administrators no longer need to predict how much space they will need or to purchase disks ahead of time. For maximum SAN efficiency, StarWind Thin Provisioning allocates only as much space as is required for the data being written on that volume (a generic sketch of this allocate-on-write idea follows this list).
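
The allocate-on-write idea behind thin provisioning can be demonstrated with an ordinary sparse file on a Unix-like system; this is a generic illustration, not StarWind's implementation. The file reports a large logical size, but the filesystem only allocates blocks for the regions actually written.

```python
# Generic sparse-file demo of thin provisioning (not StarWind-specific):
# the volume's logical size is reserved up front, but disk blocks are only
# allocated when data is actually written. Assumes a Unix-like filesystem.
import os

path = "thin_volume.img"
logical_size = 1 << 30            # present a 1 GiB volume

with open(path, "wb") as f:
    f.truncate(logical_size)      # sets the logical size without writing data
    f.seek(4096)
    f.write(b"x" * 4096)          # only now is a block of real storage used

st = os.stat(path)
print("logical size :", st.st_size, "bytes")
print("allocated    :", st.st_blocks * 512, "bytes")   # far smaller than st_size

os.remove(path)
```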

StarWind

  • StarWind is iSCSI SAN software that turns any industry-standard server into a highly reliable and highly available enterprise-class SAN or centralized iSCSI storage.
  • StarWind creates a more efficient virtual environment. Its iSCSI SAN solution is highly affordable, helping you slash operating costs, and highly reliable, helping you protect your entire virtual data center.

Image representing VeriSign as depicted in Cru...

VeriSign Identity Protection

  •  VeriSign Identity Protection (VIP) Authentication Service helps companies to mitigate risk and maintain compliance with a scalable, reliable Two-Factor Authentication platform delivered without the high cost of infrastructure and operations.
  • With VIP Authentication Service, the end user experiences a fast response and the assurance that their identity is protected by an added layer of security

A Scalable, Reliable Platform

Our flexible platform is highly available, scalable and reliable, leveraging VeriSign’s expertise in running on-demand, critical Internet infrastructure globally. With VIP, the end user’s identity information stays within your enterprise; only the security code and credential ID pass anonymously to VeriSign for validation.

A Convenient Choice of Credentials

  • VIP Authentication Service supports a range of OATH-compliant credential form factors to meet the diverse needs of end users. Enterprise customers who use VIP have immediate access to the most convenient and cost effective form factors available for employees, business partners and customers.
  • Freely available credentials for mobile handsets and PC desktops dramatically reduce the total cost of ownership for typical Two-Factor Authentication solutions. VeriSign also offers the most deployed and innovative hardware credentials including tokens and credit card-sized credentials.

Preferred for the Enterprise

End Users may use their VIP credential on any participating Web site that displays the VeriSign Identity Protection logo. VIP Network Members include eBay, PayPal, AOL and more.

VeriSign® Identity Protection (VIP) Access for Mobile turns a mobile phone into a two-factor authentication security device

VIP service

How It Works

  • Most enterprise networks and externally facing Web sites require a username and password to identify you online. But usernames and passwords can be cracked, hacked and faked. Your VIP Access for Mobile verifies your identity by generating a unique security code, or one-time password, each time you use it (a generic sketch of this mechanism follows this list).
  • Use your VIP Access for Mobile to protect your identity, financial assets and privacy when you sign in to your enterprise or to leading Web sites like PayPal, eBay, AOL and other Web sites displaying the VIP Network Member logo.
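
OATH-style mobile credentials such as VIP Access typically derive their codes with a time-based one-time password algorithm along the lines of RFC 6238 (TOTP). The sketch below shows that general mechanism with a made-up shared secret; it is not VeriSign's actual implementation.

```python
# Generic TOTP sketch (in the style of RFC 6238) with a made-up secret,
# shown only to illustrate how a credential derives a short-lived code.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // timestep               # 30-second time window
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    print(totp(b"hypothetical-shared-secret"))           # e.g. "492039"
```

The validation service holds the same shared secret, recomputes the code for the current time window and accepts a match, which is why each code is only useful for a short time window.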

This is a photograph of the control room in th...

NASA

INTRODUCTION:
Rainfinity‘s technology originated in a research project at the California Institute of Technology (Caltech), in collaboration with NASA’s Jet Propulsion Laboratory and the Defense Advanced Research Projects Agency (DARPA).

  • The name of the original research project was RAIN, which stands for Reliable Array of Independent Nodes.
  • The goal of the RAIN project was to identify key software building blocks for creating reliable distributed applications using off-the-shelf hardware.
  • The focus of the research was on high-performance, fault-tolerant and portable clustering technology for space-borne computing.

Two important assumptions were made, and these two assumptions reflect the differences between RAIN and a number of existing solutions, both in industry and in academia:

  • The most general shared-nothing model is assumed: there is no shared storage accessible from all computing nodes, and the only way for the computing nodes to share state is to communicate via a network. This differentiates RAIN technology from existing back-end server clustering solutions such as SUNcluster, HP MC Serviceguard or Microsoft Cluster Server.
  • The distributed application is not an isolated system: the distributed protocols interact closely with existing networking protocols, so that a RAIN cluster is able to interact with its environment. Specifically, technological modules were created to handle high-volume network-based transactions, which differentiates it from traditional distributed computing projects such as Beowulf.
  • It became obvious that RAIN technology was well-suited for Internet applications. During the RAIN project, key components were built to fulfill this vision.
  • A patent was filed and granted for the RAIN technology.
  • Rainfinity was spun off from Caltech in 1998, and the company has exclusive intellectual property rights to the RAIN technology.
  • After the formation of the company, the RAIN technology has been further augmented, and additional patents have been filed.

The guiding concepts that shaped the architecture are as follows:
1. Network Applications: The architectural goals for clustering data network applications are different from those for clustering data storage applications. Similar goals apply in the telecom environment that provides the Internet backbone infrastructure, due to the nature of the applications and services being clustered.

2. Shared-Nothing: The shared-storage cluster is the most widely used for database and application servers that store persistent data on disks. This type of cluster typically focuses on the availability of the database or application service rather than on performance. Recovery from failover is generally slow, because restoring application access to disk-based data takes minutes or longer, not seconds. Telecom servers deployed at the edge of the network are often diskless, keeping data in memory for performance reasons, and can tolerate only a short failover time. Therefore, a new type of shared-nothing cluster with rapid failure detection and recovery is required. The only way for the nodes of a shared-nothing cluster to share state is to communicate via the network.

3. Scalability: While the high-availability cluster focuses on recovery from unplanned and planned downtime, this new type of cluster must also be able to maximize I/O performance by load balancing across multiple computing nodes. Linear scalability with network throughput is important. In order to maximize total throughput, load-balancing decisions must be made dynamically, by measuring the current capacity of each computing node in real time; static hashing does not guarantee an even distribution of traffic.
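
A minimal sketch of that dynamic decision, as opposed to static hashing: each node reports its current load, and a new session goes to the node with the most spare capacity. Node names and numbers below are invented for illustration.

```python
# Dynamic load-balancing sketch: pick the node with the most spare capacity,
# measured at decision time, instead of hashing statically. Data is invented.
nodes = {
    "node-a": {"capacity": 100, "load": 85},
    "node-b": {"capacity": 100, "load": 40},
    "node-c": {"capacity": 100, "load": 60},
}

def pick_node(nodes):
    """Return the name of the node with the most unused capacity."""
    return max(nodes, key=lambda n: nodes[n]["capacity"] - nodes[n]["load"])

def assign_session(nodes):
    target = pick_node(nodes)
    nodes[target]["load"] += 1   # account for the newly assigned session
    return target

if __name__ == "__main__":
    for _ in range(5):
        print("session ->", assign_session(nodes))
```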

4. Peer-to-Peer: A dispatcher-based, master-slave cluster architecture suffers from limited scalability because the dispatcher introduces a potential bottleneck. A peer-to-peer cluster architecture is more suitable for latency-sensitive data network applications processing short-lived sessions. A hybrid architecture should be considered to offset the need for more control over resource management. For example, a cluster can assign multiple authoritative computing nodes that process traffic in round-robin order for each network interface that is clustered, to reduce the overhead of traffic forwarding.

 



		

Video conference session between SOVA and OCAD...

Image via Wikipedia

  • Nefsis is the latest video conferencing online service designed for virtual work-spaces and business-to-business online meetings.
  • Nefsis is easy to use, takes advantage of PCs and networks you already have, and represents a realistic, immediately available option to reduce your carbon footprint starting right this moment.
  •  Nefsis dramatically simplifies what used to be a complicated process – video conferencing. Today, even an ordinary webcam on any Pentium PC produces video quality on par with boardroom equipment sold a few years ago.
  • A Logitech 9000 plug-and-play USB webcam produces stunning, full-screen video using nominal bandwidth.
  • The Nefsis server cloud can easily connect multiple employees and business users in the same meeting, even if they are in separate offices (behind separate firewalls and proxies).

Note: No hardware or software installation is needed for the Nefsis free trial.

Green Technology & Realistic Carbon Footprint Reduction

  • On-demand video conferencing can reach virtually any business desktop worldwide. It is one of the few realistic green technologies available to businesses of all sizes, and it is available now.
  • For example, by using Nefsis you and all your employees could telecommute one day per week.
  • This alone would mean a huge reduction in fuel burned, as well as cost-of-living relief for employees. Another good example is cutting repeat travel to recurring meetings. This too is a realistic, immediately available greenhouse-gas reduction technique.

MINIMUM SYSTEM REQUIREMENTS for NEFSIS

  1. Windows 2000, XP or later; for premium visual experience, Windows Vista & 7 are preferred
  2. Pentium 4 or newer
  3. 512MB RAM available
  4. Broadband Internet access (DSL, Cable Modem, T1, etc.)
  5. DirectShow 8.1 or later compatible webcam or video input (optional, to send video)
  6. DirectSound compatible audio input (optional, to send voice), and headset or speakers
  7. Microsoft® PowerPoint® 97 or later versions.

No proprietary hardware required.

Automatic CPU & Bandwidth Adjustment

  •  Nefsis uses industry standard web browsers and Internet connections that traverse firewalls and proxies for business users.
  •  Nefsis is also bandwidth and video peripheral agnostic. It allows you to use any connection type, without special routing requirements, and almost any Windows compatible video source.
  •  The quality and speed of your online meeting experience will be governed by the features you use, your PC and video hardware, and the bandwidth available to you in real-time.

Nefsis free trial


Apache HTTP Server configuration screen

Image via Wikipedia

Introduction

  • Apache is a web server that has its roots in the NCSA HTTPd web server.
  • It is the most widely used web server on the Internet today. It can be integrated with content technologies like Zope, with databases like MySQL and PostgreSQL (and others, including Oracle and DB2), and with the speed and versatility offered by web rapid application development (RAD) languages like Personal Home Page (PHP).
  • It is highly configurable, flexible and, most importantly, open. This has led to a host of development support on and around Apache.

External modules such as mod_rewrite, mod_perl and mod_php have added fistfuls of functionality, as well as improving the speed with which requests can be serviced. Apache has, in no small part, played a role in the acceptance of the Linux platform in corporate organizations.

Apache versions

  • Apache comes in two basic flavors: Apache version 1.3.x and version 2.x.
  • The configuration of these two differs quite substantially in some places.

Basic server design

  • The Apache web server has been designed to be used in either a modular or non-modular way.
  • In the former, modules are compiled separately from the core Apache server and loaded dynamically as they are needed.
  • Generally though, when we unpack an Apache that has been pre-compiled (i.e. it’s already in a .deb or .rpm package format), it is compiled to be modular.

Basic configuration

  • The core Apache server is configured using one text configuration file – httpd.conf. This usually resides in /etc/httpd, but may be elsewhere depending on your distribution.
  • The httpd.conf file is fairly well documented; in addition, the documentation that ships with Apache is an excellent resource to keep handy.

The configuration file has three sections:

1. The global configuration settings

2. The main server configuration settings

3. The virtual hosts

Global configuration settings

In this part of the configuration file, the settings affect the overall operation of the server: settings such as the minimum and maximum number of server processes to start, the server root directory and the port to listen on for HTTP requests (the default port is 80, although you may make this whatever you wish).

Main server configuration settings

The majority of the server configuration happens within this section of the file. This is where we specify the DocumentRoot, the place we put our web pages that we want served to the public. This is where permissions for accessing the directories are defined and where authentication is configured.

The virtual hosts section

  • Hosting many sites does not require many servers. Apache has the ability to divide its time by offering web pages for different web sites. The web site www.QEDux.co.za is hosted on the same web server as http://www.hamishwhittal.org.za.
  • Apache is operating as a virtual host: it is offering two sites from a single server, as sketched below.
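
In Apache this is configured with VirtualHost sections in httpd.conf; the underlying mechanism of name-based virtual hosting is simply dispatching on the HTTP Host header. The toy Python server below (hostnames and page contents are invented, and this is not Apache itself) illustrates that dispatch:

```python
# Toy name-based virtual hosting: one listening server, several "sites",
# selected by the HTTP Host header. Hostnames and pages are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = {
    "www.qedux.co.za": b"<h1>QEDux site</h1>",
    "www.hamishwhittal.org.za": b"<h1>Hamish Whittal's site</h1>",
}

class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0].lower()  # drop any :port
        body = SITES.get(host, b"<h1>Default site</h1>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), VirtualHostHandler).serve_forever()
```

Requesting the same server with different Host headers (for example, curl -H "Host: www.QEDux.co.za" http://localhost:8080/) returns different pages, which is the essence of what Apache's VirtualHost sections configure.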

Tux, as originally drawn by Larry Ewing

Image via Wikipedia

Operating System:

  • An operating system is composed of two major parts; these parts are known as the “kernel” and the “userland“.
  • The kernel is responsible for handling communication between the physical hardware and the software running on the machine.
  • The “userland” consists of system utilities and user applications. These include editors, compilers and server daemons.
  • System utilities allowing you to maintain, monitor and even upgrade your system are also included.

Linux-Free and open source OS

  • Strictly speaking, Linux itself is just a kernel, and it requires additional software in order to make up a complete operating system.
  • A Linux distribution consists of the Linux kernel and a collection of “userland” software.
  • The software is usually provided by the FSF and GNU organisations, as well as by many private individuals. Some of it even originates from UCB’s BSD Unix operating system.

Components of the OS:

1. Hardware

This is the physical equipment of which the computer is composed. It includes:

  • keyboard and mouse
  • video card and monitor
  • network card
  • CPU and the RAM

2. Kernel

  • The Linux kernel acts as the interface between the hardware  and the rest of the operating system.
  • The Linux kernel also contains device drivers, which are specific to the hardware peripherals.
  • The kernel is also responsible for handling things such as the allocation of resources (memory and CPU time), keeping track of which applications are busy with which files, and security: what each user is allowed to do on the operating system.

3. Standard Library of Procedures

  • A Linux based operating system will have a standard library of procedures, which allows the “userland” software to communicate with the kernel.
  •  On most Linux based operating systems, this library is often called “libc”.

Examples

Calls that ask the kernel to open a file for reading or writing, display text on the screen, or read in keystrokes from the keyboard. A minimal sketch of such calls follows.
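
The sketch below uses Python's os module, whose open/write/read calls are thin wrappers that reach the kernel through the standard library's system-call interface; the file name is made up for the example.

```python
# The os-level calls below are thin wrappers over the open/write/read/close
# system calls that "userland" programs normally reach through the C library.
import os

fd = os.open("example.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello from userland\n")   # write the file's contents
os.close(fd)

fd = os.open("example.txt", os.O_RDONLY)
print(os.read(fd, 64).decode(), end="")  # read it back and display it
os.close(fd)
os.remove("example.txt")                 # clean up
```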

4. Standard Utilities and User Applications

  • A Linux based system will usually come with a set of standard Unix-like utilities; these are usually simple commands that are used in day-to-day use of the operating system, as well as specific user applications and services.
  • This is typically software that the GNU Project has written and published under their open source license, so that the software is available for everyone to freely copy, modify and redistribute.

Examples: commands which allow users to edit and manipulate files and directories, perform calculations and even do jobs like backing up their data.

Components of the Operating System

