
Accounting Skills That Go Beyond What You Learn in School

Accounting software has come a long way since the advent of computers, and even of the internet. As an accountant, you will regularly come across developments that you did not encounter in school. Accounting is evolving at a tremendous rate, especially as new forms of business come up with new accounting methods. New features and trends in accounting technology have emerged over the years that were never part of the course content. Some things you will only experience once you become a chartered accountant.

 

Making decisions

When you are an accountant, you will often find yourself having to make difficult decisions, especially when you are operating your own business. In that case, you are left with two options: employing another accountant to handle the accounting, or handling it yourself. If you do decide to hire another accountant, there are definite steps you should follow. You can also choose to rely on accounting technology that you are familiar with. The decision also hinges on whether the new employee or software would be a beneficial, cost-effective addition. Here, trial and error applies, as it will reveal whether your decision was the best choice. You can try out each piece of software or read through the documentation to find something that suits your needs. These software solutions can prove vital, as they ease the accounting process for your business and tend to be a cheaper option than employing an accountant.

 

Networking

While you were a student, networking was not part of the course content. In real life, however, you will often face situations where you have to network and interact with other accountants. This keeps you updated on the latest developments in accounting and reveals ways in which you can improve your practice. Networking is integral if you use cloud-based accounting technology: it allows you to learn about the technology you are using and to gather suggestions from other users on how to use the software to your benefit. Networking also plays a role in attracting new business. It is a challenging activity that an accountant learns through experience rather than having the knowledge imparted to them. If you are an introvert, networking can prove difficult.


Confusing social interaction with networking is commonplace. Even though the two might look similar, they have different intentions. Networking occurs in situations where you are marketing yourself or the product you are selling. Social interaction, on the other hand, is about building relationships with others and does not involve your work. Networking can, however, evolve into social interaction, which would mean you have developed a meaningful relationship with the other party.

As an accountant, there are many things you should be prepared to learn by yourself. Even though this knowledge is not something you will pick up in accounting school, it can prove beneficial to your business and your professional life, and it is especially useful if you use cloud-based accounting solutions.


Data Virtualization

Data virtualization, which presents a virtual representation of the data instead of a physical one, is perhaps the most interesting way for a company to increase the efficiency of its IT operations. Ken Hess wrote a good article on the subject on the popular IT blog ZDNet, for those interested. Many users forget that the data is the very reason the company’s IT infrastructure exists. Addressing inefficiencies at the data layer also tends to produce the best return on investment. Deduplication, thin provisioning and snapshots are examples of techniques that dramatically reduce or eliminate the physical problems associated with the data, while still providing full, “virtual” access to it.

Take the example of data deduplication. In a storage infrastructure, the same data is duplicated over and over again (straining the capacity limits of the entire infrastructure and of business processes), often for perfectly valid reasons. However, this massive duplication has a compounding negative impact on backup, data recovery and disaster recovery (DR) processes. By default, most backup tools regularly create complete images of a data set, which often results in dozens of copies. The benefits of eliminating these duplicates can be felt throughout the organization.

For example, by backing up to a disk-based deduplication appliance, IT sees immediate performance gains thanks to the very nature of disk. Deduplicated data also occupies far less space, both at rest and in transit: data deduplicated at point A requires far fewer infrastructure resources (bandwidth, capacity, etc.) to be stored and transferred to point B, and so on. Because typical backup deduplication ratios can reach 20:1 or more, retaining data on virtual disk can greatly improve the efficiency of backup operations, which improves quality of service and benefits the business. This efficiency gain is compounded by the savings in IT operations and time, one of the most precious resources in today’s datacenter.
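To make the arithmetic concrete, here is a minimal Python sketch of how a deduplication ratio translates into physical capacity and replication time. The 100 TB data set, the 20:1 ratio and the 100 MB/s link are illustrative assumptions, not figures from any real deployment.

def dedup_savings(logical_tb, dedup_ratio, link_mb_per_s):
    """Estimate the physical footprint of a deduplicated backup set and the
    time needed to replicate it from point A to point B."""
    physical_tb = logical_tb / dedup_ratio                 # data actually stored on disk
    saved_pct = (1 - 1 / dedup_ratio) * 100                # capacity saved vs. raw copies
    transfer_h = physical_tb * 1024 * 1024 / link_mb_per_s / 3600
    print(f"{logical_tb} TB logical -> {physical_tb:.1f} TB physical "
          f"({saved_pct:.0f}% less capacity), ~{transfer_h:.1f} h to replicate "
          f"at {link_mb_per_s} MB/s")

# Illustrative example: a 100 TB backup set, a 20:1 ratio, a 100 MB/s link
dedup_savings(100, 20, 100)

At a 20:1 ratio only 5 TB actually has to be stored and sent over the wire, which is where the capacity and bandwidth savings mentioned above come from.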

Regulatory constraints on storing email data are easy enough to understand. It is harder to determine the most cost-effective way to comply with such a mandate. The easiest way to comply with the regulation is to apply a rule blindly to the data: store it in one place and never move it. Well, I guess the easiest way would be to have it all managed by IT support services in London. Either way, you can always find the data when you need it. But this practice is often inconsistent with storage optimization objectives. An email is a representation of data, and early in its life it may require a level of availability that simply will not be necessary after a certain period of time, for example once it becomes a fixed (persistent) archive that never changes. So it makes sense to move that email onward to the most economical infrastructure platform, which will probably not be the original one. The fact that retention and archiving constraints exist does not mean the object should be condemned to inefficient treatment forever.

The same logic should be applied to data without storage constraints, and the benefits will be similar. Every data object, regardless of its form, will eventually become a persistent, immutable asset that is only rarely consulted. This type of data represents the overwhelming majority of the data a company manages, and it has radically different requirements than active or dynamic data. In general, the same logic can be applied as soon as data ceases to be active and frequently accessed, whether it is stored in an archive or simply on an economical storage class. By deduplicating this data, we can protect, access, secure and store it more easily and effectively, without requiring superhuman effort from IT staff.
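As a rough illustration of the age-based tiering logic described above, here is a small Python sketch. The DataObject class, the tier names and the 90-day threshold are assumptions made for the example, not part of any real archiving product.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataObject:
    name: str
    last_modified: datetime
    tier: str = "primary"          # fast, expensive storage by default

def tier_by_age(objects, archive_after_days=90):
    """Move objects that have become persistent (unchanged for a while)
    to a cheaper archive tier; leave active data on primary storage."""
    cutoff = datetime.now() - timedelta(days=archive_after_days)
    for obj in objects:
        if obj.last_modified < cutoff and obj.tier == "primary":
            obj.tier = "archive"   # economical storage class, still accessible
    return objects

A real policy engine would of course also weigh legal retention periods and access patterns; the point is simply that once an object stops changing, it can be moved to a cheaper storage class without losing access to it.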

Having less “live” data consuming precious infrastructure resources makes it possible to create more virtual instances of that data and to derive greater business value from it. This is a virtuous cycle that confirms a simple principle: we can do more with less. Again, if you’re having trouble with the subject, don’t stress about it; you can always outsource the job to professionals who provide IT support services in Soho.


PCA and PRA Part 2

PRA: ensure the orderly restart of applications in case of failure or disaster

For companies that do not have the means or the need for a PCA, the PRA is the way to ensure an orderly, and as fast as possible, restart of the company’s IT infrastructure in the event of an incident. This restart is generally performed on a backup site, either owned by the company or provided by a third-party provider. The PRA defines the architectures, means and procedures necessary to ensure the protection of the applications it covers. Its objective is to minimize the impact of a disaster on the company’s business. There are several restart modes. A warm restart is based on a synchronous or asynchronous copy of the data from the main site: the last known consistent state of the data serves as the basis for the servers positioned on the backup site. Data replication (which can be provided by backup and replication technologies such as Veeam Backup and Recovery, or by continuous data replication technologies such as EMC RecoverPoint, Dell AppAssure, IBM Tivoli Storage Manager, CA ArcServe Replication or DoubleTake from Vision Solutions) allows standby servers to restart quickly in a state as close as possible to the one that preceded the disaster. The RTO (Recovery Time Objective, the time to bring the application back into production) is therefore minimal, and the RPO (Recovery Point Objective, the time between the last consistent state of the data and the disaster) is reduced to a minimum, often within minutes.
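A rough way to reason about these two objectives is sketched below in Python. The five-minute replication interval and the three failover steps are illustrative assumptions, not figures from a real PRA.

def warm_restart_objectives(replication_interval_s, failover_steps_s):
    """Worst-case RPO is bounded by the asynchronous replication interval;
    RTO is roughly the sum of the failover steps on the standby site."""
    rpo_minutes = replication_interval_s / 60
    rto_minutes = sum(failover_steps_s) / 60
    return rpo_minutes, rto_minutes

# Replication every 5 minutes; failover = promote the replica, restart the
# services, redirect the users (illustrative durations in seconds)
rpo, rto = warm_restart_objectives(300, [120, 180, 60])
print(f"RPO ~{rpo:.0f} min, RTO ~{rto:.0f} min")

With these assumed inputs, both objectives land in the range of a few minutes, which is what “warm restart” buys compared with the cold standby described next.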

The situation is somewhat different in the case of a cold standby. This concerns the many companies that do not have the financial and/or technical means for a PCA or a warm restart. In this case, the restart after a disaster relies on the last backups made by the company. In the best case, these backups are replicas held on a deduplicated disk-based backup system such as a Data Domain appliance; in the worst case, a simple tape backup.

In case of disaster, the company has to activate its backup site, restore its data from scratch from the backup media (disk or tape) and reactivate its applications. This is the most economical way to build a PRA, but it has a price in terms of RTO and RPO. The RTO is at minimum the time needed to restore the data and bring the servers back into service, which for complex environments can mean several hours or even days. The good news is that the commoditization of disk backup solutions such as Data Domain has greatly reduced the RTO (from 17 hours to 2 hours on average, according to a 2012 IDC study).
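The same kind of back-of-the-envelope estimate can be made for a cold restart. In this sketch the data volume, restore throughputs and server restart time are illustrative assumptions, not the IDC figures quoted above.

def cold_restart_rto(data_tb, restore_mb_per_s, server_restart_hours):
    """RTO for a cold restart: time to restore the data from the backup
    media plus time to bring the servers back into service."""
    restore_hours = data_tb * 1024 * 1024 / restore_mb_per_s / 3600
    return restore_hours + server_restart_hours

# 10 TB restored from tape at ~80 MB/s vs. from deduplicated disk at ~400 MB/s,
# with an assumed 2 hours to reactivate the applications in both cases
print(f"tape: ~{cold_restart_rto(10, 80, 2):.1f} h")
print(f"disk: ~{cold_restart_rto(10, 400, 2):.1f} h")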

The RPO depends on the frequency of backups. In the worst case, it can reach one or more days (particularly for applications whose backup windows are long and which are backed up once a day or less). Here again, deduplicated disk backup has improved the situation by reducing backup windows (from 11 hours on average with a tape library to 3 hours with deduplicated disk backup systems such as EMC Data Domain, Quantum DXi or HP D2D).
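Assuming, for illustration, a nightly backup, the worst-case RPO can be sketched as follows; the 24-hour interval and the 11-hour and 3-hour windows echo the averages quoted above but are used here only as example inputs.

def backup_rpo_hours(backup_interval_h, backup_window_h):
    """Data written after the last completed backup is lost, so the worst-case
    RPO is the interval between backups plus the time the backup itself takes."""
    return backup_interval_h + backup_window_h

print(backup_rpo_hours(24, 11))  # nightly backup to a tape library: up to ~35 h of data at risk
print(backup_rpo_hours(24, 3))   # nightly backup to deduplicated disk: up to ~27 h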

Note that while a cold restart was still the rule for many companies, especially SMEs, as recently as five years ago, the spread of virtualization and networked storage has made a warm restart available to a growing number of companies. The PCA is still not necessarily accessible to everyone, even if it has become commonplace for some applications, but it is now within reach of many medium-sized companies. The progress made by applications (for example with the integration of failover into most databases), storage arrays, disk-based backup solutions and virtualization technologies should make it accessible to the greatest number over the next few years. And the advent of PCA and PRA cloud services should also help democratize these technologies a little further.


PCA and PRA Part 1

PCA and PRA are the corporate weapons against IT disasters. Few companies today can do without them, to the point that in many cases the inability to deal with an IT problem can be fatal. According to a study by the American consulting firm Eagle Rock, 40% of companies that suffer a 72-hour interruption of their IT and telecommunications resources do not survive the disaster. That is why more and more companies of all sizes strive to implement a disaster recovery plan (PRA) or an IT business continuity plan (PCA).

Over time, the meaning of these two terms has evolved. Historically, the continuity plan focused on analyzing the potential impact of a disaster or failure on the company’s business and on defining the means and procedures to be implemented to limit the consequences. The recovery plan, meanwhile, addressed the IT aspects of the PCA.

For IT professionals, the terminology has evolved: increasingly, the PCA describes all the means used to ensure the continuity of applications, that is to say, their high availability (which implies that these applications cannot be stopped, even in the event of a disaster at a site). The PRA, meanwhile, describes all the means and procedures designed to ensure a rapid and orderly resumption of production after an unexpected shutdown (for example due to a technical or power failure, human error, or a natural disaster). The difference between the two approaches tends to come down to the duration of infrastructure downtime and application recovery after a disaster.

PCA: ensure high availability of applications

As part of a PCA, the company defines the architectures, means and procedures necessary to ensure high availability of the infrastructure (data center, servers, network, storage) supporting its enterprise applications. The objective is to ensure that, whatever the situation, the infrastructure provides users with uninterrupted service.

In general, a PCA implementation requires redundant facilities spread across several data centers operating jointly, so that in case of a component failure at the primary site, the secondary site automatically takes over.

Typically, such an architecture implies a scheme guaranteeing data consistency across the storage arrays of the primary and secondary sites. This is what a solution such as EMC VPLEX allows, as does the latest generation of Hitachi VSP G1000 arrays; the GeoCluster technologies from NetApp or HP (3Par) are another option. These technologies provide transparent data replication between two or more sites and allow simultaneous write access to the data on all sites. Coupled with orchestration and virtualization solutions, or with software failover technologies (Oracle RAC, SQL Server Failover Cluster…), they allow automated failover of applications from one data center to the other in case of failure at the primary site.

Note that not all enterprise applications are necessarily concerned by the implementation of a PCA, simply because some are not deemed critical and can tolerate a stop or a possible loss of data. This criticality is defined collaboratively with the business in order to determine the scope of the PCA and which applications will instead be covered by a “simple PRA”. The infrastructure must also be sized properly so that failover to the secondary site does not affect performance too much. In an active/active architecture, production is in fact split between the company’s two data centers, so a disaster on one of them mechanically removes half of the available processing capacity, and thus potentially degrades the performance of the surviving infrastructure.
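As a simple illustration of that sizing concern, here is a small Python check; the load figures are assumptions, and real capacity planning would obviously be more detailed.

def failover_headroom(total_load_pct_of_one_site):
    """total_load_pct_of_one_site: overall production load expressed as a
    percentage of a single site's capacity (both sites assumed identical)."""
    per_site_normal = total_load_pct_of_one_site / 2   # load split across two sites
    after_failover = total_load_pct_of_one_site        # one site carries everything
    print(f"normal: {per_site_normal:.0f}% per site, "
          f"after failover: {after_failover:.0f}% on the surviving site"
          + ("  -> overloaded!" if after_failover > 100 else ""))

failover_headroom(90)   # each site normally at 45%; the survivor runs at 90% -> acceptable
failover_headroom(140)  # each site at 70%; the survivor would need 140% -> undersized

In other words, an active/active PCA only delivers its promise if each site is sized to carry the full production load on its own, not just its everyday half.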