Archive for January 2013
Developers are in a unique position to educate others and to capitalize on cloud opportunities. Unlike learning new programming techniques or frameworks, cloud learning moves beyond development: there are infrastructure aspects to consider, as well as potential changes to organizational processes and policies. However, developers know the application, and cloud administration is a much lower bar than, for example, network administration. If you’re looking for a strategy to follow on the path to cloud enlightenment, you’re reading the right article.
Give the Cloud a Whirl
When it comes to the cloud, don’t wait for the storm to hit you; educate yourself. There is no substitute for experimentation and hands-on experience. Start by separating reality from marketing. Almost every cloud vendor, Microsoft Azure included, offers a free trial. If you are truly new to cloud development, imagine borrowing a company server for three months, only with no setup time. Just turn it on and away you go.
Given that experimentation time is limited, go for breadth rather than depth. Get a taste of everything. What most developers find is that, after some initial orientation and learning, the experience becomes what they already know. For example, Azure has an ASP.NET-based hosting model called Web Roles. After configuring and learning Web Role instrumentation, the development experience is ASP.NET. Learning Azure Web Roles amounts to learning some new administration and configuration skills, coupled with a handful of new classes. The rest of what you need to know is nothing new if you’ve done ASP.NET!
Developers must keep their time constrained. Struggling for hours with something new is often not worth the effort, and wide adoption of something that difficult to work with should be questioned. Cloud skills are typically not niche or differentiating skills like, for example, SQL Server tuning.
Whatever cloud option a developer starts with, understand the authentication options. Intranet developers typically take authentication for granted; ASP.NET makes it look easy. Consider all the moving parts involved in making authentication automatic and secure. Understanding authentication is especially important if parts of an application will live within the organization’s datacenter and parts within the cloud provider.
Finally, look for the right opportunities to apply these new skills.
Navigating the Fog
Most developers are adept at picking when to jump on new technology and when to pull back. Unlike adopting, for example, a new Web Services approach, adopting a cloud option entails learning a little more administration. The cloud can give a developer total control, but the cost is that extra administration.
Developers may find themselves in new territory here. Typically a “hardware person” selects a machine and a “network person” selects and configures a firewall. Cloud portals make network and server configuration easier, but they don’t eliminate the configuration role. The public cloud handles the hardware, but the developer must choose, for example, how many CPUs, servers, and load balancers will be needed. This lowers the administration bar, but it also may place the burden on the developer.
The cloud will not be the right option for every project, but give it a fair chance. Decision makers tend to have one of two reactions to the cloud: outright rejection or wild-eyed embrace. Neither reaction is healthy; there is middle ground. Don’t let unrealistic expectations set by marketing brochures guide the first project. The experiences described earlier in the article will be helpful here. Set the bar low and make the first experience a good one.
Supplementing with the Cloud
One potential approach is to supplement with the cloud: let the cloud handle some part of the application. For example, requirements may dictate a web page to handle user registration. Registrations often have deadlines and, given human nature, people procrastinate, so registration traffic is likely to spike in the week or days before the deadline. Rather than purchasing servers to accommodate the spike and leaving them idle for most of the year, do registration in the cloud. Dial up more servers the week before registrations are due and dial the server count back down the week after.
Aside from technical change, cloud adoption may require organizational change.
Clouds Don’t Work in a Vacuum
I would bet good money that most developers reading this article have no idea which ports in their organization are closed to incoming TCP/IP connections. Knowing who to ask, however, is far more important than knowing the answer. In some sense every organization is its own private cloud, and networking professionals have been connecting things together longer than developers have. Internet performance is also considerably different from intranet performance. Cultivate relationships with whoever operates your firewall.
Passing through a firewall is overhead, and your organization’s infrastructure may not be cloud ready. If your network people banter about DMZs, though, chances are it is. As stated earlier, authentication is important to cover; forcing users to authenticate multiple times within an application is intolerable to most users.
Budgeting for servers may be different than budgeting for compute cycles. There may be concern over whether compute cycles will cost more than purchasing a server or two. There is no shortcut here: just like any other budgeting, a developer must do the math. Again, this may be new territory for developers, who typically aren’t asked how much storage an application requires; the storage cost is usually spread across the projects an organization conducts. Budgeting difficulties may be a good reason not to do a project. The upside is that, after doing the math, a developer will likely find the costs far below buying the hardware.
The cloud gives a developer control over all components from administration to assemblies. Added control comes with a price. A developer must venture into some new territory. This article provided a path to follow.
What is your opinion on cloud opportunities? Is it worth giving a trial a try? What is your personal experience in adopting a cloud option? Maybe you have some thoughts to share!
Scala is a statically typed, multi-paradigm programming language that runs on the Java virtual machine and provides Java-like syntax with a few improvements.
Being a multi-paradigm programming language, Scala allows mixing multiple programming styles such as object-oriented, imperative and functional programming. Whether that is good or not is not a question that can be answered with a yes or no. Supporters would say that programmers can choose from a variety of styles and stick to the best one for their needs. Others would argue that putting together features found in many other programming languages can’t work well, mostly because it increases complexity and makes a language obscure.
Scala code compiles to bytecode and runs on the Java virtual machine, so it is compatible with other Java applications. If most of your code and libraries come from the Java world, this can be nothing but a good thing. At the same time, some of Scala’s additional complexity is dictated by assuring compatibility with Java.
For some developers Scala code is obscure; for others it can be a nice and neat way of solving specific problems.
One more positive aspect of Scala is its conciseness. We all know how verbose Java is and how much boilerplate code developers write every day, including constructors, getters, variable initialization (types and generics), semicolons and much else.
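As a small illustration of that conciseness (the `User` class is just an invented example), a Scala case class replaces in one line the constructor, getters, `equals`, `hashCode` and `toString` a Java bean would require:

```scala
// One line replaces the constructor, getters, equals, hashCode
// and toString that an equivalent Java bean would need.
case class User(name: String, age: Int)

val u = User("Alice", 30)
// Case classes also come with structural equality and a copy method:
assert(u == User("Alice", 30))
val older = u.copy(age = 31)
println(older) // prints User(Alice,31)
```

The compiler-generated `copy` and structural equality are a large part of why everyday Scala code stays short.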
Now let’s see some other aspects of working with Scala:
At first sight, performance in Scala is very good, comparable to Java: Scala compiles to the same bytecode and runs on the same Java virtual machine. Still, how fast Scala code is may vary from case to case, and usually it comes down to how the code is written. Awareness of how to write high-performance Scala code is especially important for Java developers who are not yet experienced in Scala and functional programming.
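To make that concrete, here is a sketch of the stylistic choice involved; treat the performance note as a rule of thumb rather than a measured claim, since micro-benchmarks vary:

```scala
// Two ways to sum integers. Both compile to JVM bytecode, but the
// functional version allocates a closure and works on a List whose
// Int elements are boxed, while the while-loop over a primitive
// Array compiles to a tight, allocation-free loop.
def sumFunctional(xs: List[Int]): Int = xs.foldLeft(0)(_ + _)

def sumImperative(xs: Array[Int]): Int = {
  var total = 0
  var i = 0
  while (i < xs.length) {
    total += xs(i)
    i += 1
  }
  total
}

assert(sumFunctional(List(1, 2, 3, 4, 5)) == 15)
assert(sumImperative(Array(1, 2, 3, 4, 5)) == 15)
```

On cold paths the functional version is usually the better choice for readability; the imperative form earns its keep only in hot loops.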
There are not many development tools created specifically for Scala, but fortunately, thanks to the compatibility with Java, the situation doesn’t look so bad. One family of tools that is hardly usable with Scala is those that perform some sort of code analysis, for instance code coverage; static code analysis tools are even less usable.
Two native Scala extensions worth mentioning are the Lift web framework and the Akka actor platform.
The majority of libraries and frameworks from the Java world should be usable quite smoothly, which, among other things, is thanks to implicit conversions in Scala.
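A minimal sketch of that Java interop, using the `scala.jdk.CollectionConverters` decorators (available since Scala 2.13; earlier versions used the implicit-conversion-based `JavaConverters`):

```scala
import scala.jdk.CollectionConverters._

// A Java library hands us a java.util.List; the asScala decorator
// wraps it so the full Scala collections API becomes available.
val javaList: java.util.List[String] = new java.util.ArrayList[String]()
javaList.add("scala")
javaList.add("java")

val upper = javaList.asScala.map(_.toUpperCase).toList
assert(upper == List("SCALA", "JAVA"))
```

The conversion is a cheap wrapper, not a copy, so mutations through either view are visible in the other.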
Interoperability between Scala and other programming languages, including Java/C/C++, is very good, mostly because it runs on the Java virtual machine: whatever Java can talk to, Scala can talk to as well. Taking into account the support for implicit conversions described above, I would say Scala is one of the leaders in interoperability among programming languages.
Developers are provided with all they need to test Scala applications efficiently. To follow a Test-Driven Development methodology, they can use JUnit, popular in the Java world. If someone is more keen on Behaviour-Driven Development, then ScalaTest is the way to go.
Monitoring and maintenance
One of the main tools for monitoring production Scala applications is JMX (Java Management Extensions). It does its job well when we want to analyze predefined statistics, but sometimes we need to investigate aspects of a production application while it’s running, and JMX can’t provide all the required data. For such scenarios, Java provides the Java Virtual Machine Tool Interface (JVM TI), which allows inspecting running Java applications and can be used for Scala too.
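As a minimal sketch of how a Scala application can expose its own predefined statistics over JMX (the `RequestStats` bean and its object name are illustrative, not from any real project):

```scala
import java.lang.management.ManagementFactory
import javax.management.ObjectName

// For a standard MBean the management interface must be named
// <ImplementationClass>MBean; a Scala trait with only abstract
// members compiles to exactly such a Java interface.
trait RequestStatsMBean {
  def getRequestCount: Int
}

class RequestStats extends RequestStatsMBean {
  @volatile private var count = 0
  def increment(): Unit = count += 1
  def getRequestCount: Int = count
}

val stats  = new RequestStats
val server = ManagementFactory.getPlatformMBeanServer
val name   = new ObjectName("com.example:type=RequestStats")
server.registerMBean(stats, name)

stats.increment()
// The attribute is now visible to JMX clients such as JConsole.
assert(server.getAttribute(name, "RequestCount") == 1)
```

Anything registered this way shows up alongside the JVM’s built-in memory and thread beans, so no extra agent is needed.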
The Scala community, including forums, mailing lists and blogs, is not the biggest in the world, but it’s very energetic. Most people in the Scala community are very passionate developers who are always happy to help others solve their problems. The books are also great, for example Programming in Scala by Martin Odersky, Lex Spoon and Bill Venners.
No one will argue that the pool of Scala developers is far smaller than for languages such as Java or C/C++. Still, when looking for Scala developers we can also consider Java developers with an interest in Scala; the learning curve is not massive, as Scala’s syntax resembles Java’s. In that case, at least one experienced Scala developer can help the team adopt the new language.
Have you moved from Java to Scala? How do you find this language?
Thanks for sharing your opinion!
It goes without saying that today’s market offers a wide array of PHP web frameworks that help programmers develop applications and systems fast and easily, considerably easing their lives. Among them are such great frameworks as Yii, CodeIgniter, Zend, CakePHP and Symfony. This article is about the PHP framework known as Symfony. It started as a private project for a company that wasn’t satisfied with the existing PHP frameworks and has evolved into one of the most reliable and widely used enterprise PHP frameworks.
Symfony aims to speed up the creation and maintenance of web applications by automating common tasks, so that developers can focus on the specifics of an application. It can be used for building robust applications in an enterprise context, meaning you have full control over the configuration: from the directory structure to the foreign libraries, almost everything can be customized. To match your enterprise’s development guidelines, Symfony is packed with additional tools that help you test, debug and document your project.
There are, though, a few conditions necessary for installation: Linux, FreeBSD, Mac OS or Microsoft Windows, and a web server with PHP 5. The current version 1.2 supports only PHP 5.2 or higher, but previous versions run on PHP 5.0 and 5.1 systems. Unfortunately, like many other modern frameworks, Symfony lacks support for PHP 4; on the other hand, it is compatible with almost every RDBMS (Relational Database Management System) and has low performance overhead.
Developers as well as customers both benefit from using this great framework.
Benefits for business:
A business that decides to use Symfony as its PHP development framework can enjoy many benefits. First of all, Symfony is heavily documented and easy to configure on most platforms. It is also easy to extend and compatible with many existing business libraries, methods and infrastructure, making interaction with existing systems relatively easy. Its stability and long-term plans make this framework an ideal choice at a business level: it is a project that will be maintained for a long time.
The only requirement is having PHP 5 installed. It doesn’t require a particular database engine, and by using ORM (Object-relational mapping) it keeps the database and data layers independent. All of this means no headaches for the IT or hosting help desk teams when the development environment is being configured.
Using Symfony will help your business develop better quality web applications, with testable and reusable code that can be adapted to the changing requirements of a business environment. And all this at no cost, since it’s an open source project running on open source technologies. For all these reasons, adopting Symfony can bring a lot of benefits to any web development business.
Benefits for developers:
1. Code testability and reusability:
As Symfony adheres to strict object orientation and the MVC pattern, it gives programmers the possibility to create cleaner, more testable code. This code can be refactored and reused, which saves plenty of time in the overall development process. The framework is heavily documented, with PDF versions of books that can be downloaded freely from the official website, which helps developers avoid a long learning curve.
2. Ease of configuration:
Symfony is easy to configure, as it employs sensible configuration defaults, leaving the programmer to configure only the things specific to the application being developed.
3. Development tools:
Symfony includes a wide variety of useful command-line tools to help with development and project management. You will use these tools to do things such as automatically generating your Propel model classes (the Symfony framework is database independent thanks to the “Propel” or “Doctrine” ORM layer), scaffolding, clearing the cache, etc.
4. Plug-in Creation:
When you need to add a feature to the framework, all you have to do is create a plug-in in a directory that you can later copy to all your projects. This is one of its most striking features and saves an immense amount of time. Most of the time you will be able to find a plug-in that implements a specific feature you need; if not, you can find something close and customize it. If nothing suitable exists, it’s really easy to create a plug-in yourself.
5. Advanced cache system:
The framework has extensive built-in caching features. It is easy to configure view caching so that whole pages or fragments are cached, and it is also easy to extend the base cache classes to quickly create your own custom data caching.
Does Symfony fit me?
No matter whether you are a PHP guru or new to web application development, you will be able to use Symfony. The most important thing to define is the size of your project. If you are planning a simple site not exceeding 5-10 pages with limited database access, you’d probably be better off using PHP alone: a web application framework won’t give you much, and object orientation and an MVC model would probably only slow down your development process. Besides, Symfony is not optimized to run effectively on a shared server where PHP scripts can run only in Common Gateway Interface (CGI) mode.
If you are going to develop more complex web apps, plain PHP is not enough. If you want development to be fast and easy, Symfony fits you perfectly. Moreover, if you plan to maintain and extend your app in the future, you need lightweight and effective code, and in that case Symfony is the right solution.
If you want to see how fast and convenient it is to develop with Symfony, visit Symfony’s official website for a visual demonstration of how the framework works.
Good luck and looking forward to seeing your comments!:)
Posted January 15, 2013
ASP.NET seems to have more and more quality options regarding extensible content management systems with each passing year. Depending on your needs, there are excellent options available both with commercial licenses or open source code.
In our blog we have already tried to gather information about PHP and Java web frameworks, and in this article I’m going to present a list of open-source CMSs for .NET that, in our opinion, are worth taking a look at.
DotNetNuke (DNN for short)
If you are looking for something stable, DNN will be the answer. This CMS has been around for a while and is probably the most well known and popular of all the .NET CMSs presented in my list. It’s a web content management platform used to quickly develop and deploy interactive and dynamic web sites, intranets, extranets and web applications. It’s available in a free Community edition and subscription-based Professional, Elite and Elite Premier editions. The Community edition contains most of the features of the other editions, but support is left up to the community. The Professional edition gives you support from the DotNetNuke Corporation along with a few more features, and for a (much) higher price, the Enterprise edition gives you a few more features along with phone support.
Kentico CMS
Another ASP.NET-based CMS offering multiple licensing options is Kentico CMS. The free license requires you to keep the logo and copyright information on your page, while the commercial versions offer support and allow you to work without the branding. This CMS allows building dynamic web sites, online shopping carts, intranets and Web 2.0 community sites. Kentico CMS is designed to be easy to use even for novice users, so web development should go quickly for anyone experienced. It has a powerful content editing interface, Kentico CMS Desk, which allows a user to edit content and preview it before publishing. It’s also easy to organize content into a tree hierarchy of documents (pages); the hierarchy (content tree) represents the site map and the navigation structure.
Umbraco
Umbraco CMS is a free and open source web CMS built on the Microsoft .NET Framework. It provides a full-featured web content management system that is easy to use, simple to customize and robust enough to run the largest sites, such as wired.co.uk and asp.net. Umbraco has recently become very popular with designers and web developers due to its open templating system and the ability to build in guidelines that automatically format the content writers provide. It uses ASP.NET master pages and XSLT, so there is no need to work with a heaped-together templating format. It’s written in C# and is happy to work with a variety of databases, so hosting shouldn’t be a problem.
N2 CMS
N2 CMS is an open source, lightweight CMS for creating simple, user-friendly websites. It contains a package of functional templates with news, wikis, photo galleries, FAQs, RSS, data entry, polls and more. Features include full control of content and nodes, drag & drop, versioning, wizards, export/import, security, globalization and more.
Orchard
Orchard CMS is Microsoft’s hand in the open source world. It’s community focused and is supported by full-time developers from Microsoft, who develop components and scripts as open tools for developers to create applications. With the help of Orchard CMS, it’s possible to create content-driven websites. While this CMS may be a bit slow and some of the things you’d expect in a more robust CMS might be missing, there are several fantastic back-end features, and it’s a CMS worth considering when choosing a technology for your project.
Sitefinity
Sitefinity is one of the most modern .NET web content management platforms available on the market today. It offers many enterprise features alongside simple, easy-to-use online administration tools for managing your website. The new user interface is very task oriented and simplifies interaction with the system. Sitefinity has six license editions, ranging from free for personal use, to $499 for small businesses, to custom pricing for the Enterprise and Multi-Site editions. Currently Sitefinity powers thousands of websites. Some of its most prominent government and institutional sites include the White House Federal Credit Union, United States Courts, Downtown Fort Worth, and the Canadian Securities Transition Office. Other customers include Toyota, Vogue, IKEA, Chevron, Bayer, and Coca-Cola.
Certainly the list of CMSs could go on and on, and every CMS has its advantages and disadvantages. I’d highly appreciate it if you share your opinions and experience using these CMSs and add your own favourites to the list.
The Android ecosystem has become a dominant force in 2012. Here’s how I see it growing in the coming year.
Brace yourselves: 2013 is upon us, and that means a whole new generation of Android devices, rumors, and expectations.
Android will have a strong showing at CES (Consumer Electronics Show), and the next few months will be littered with new smartphones and tablets. Let’s take a look at some of the trends we can expect in the Android space over the coming year.
This article will touch on many trends in the Android ecosystem, including hardware advancements, vendor decisions, and key events of the year. Given the sheer number of players in the space, there will be much to look forward to in the ever-evolving Android landscape. Indeed, much could be said about any one of these aspects of Android, but we will address them here in broader terms.
Screen size will sharpen and grow
Not long ago, most smartphone screens didn’t exceed 4 inches. Up until the HTC Evo 4G, most Android phones had 3.2-inch or 3.5-inch displays. Now, thanks to popular handsets such as the Galaxy S3 (4.8 inches) and Galaxy Note 2 (5.5 inches), consumers are becoming used to much larger screens. We’ll continue to see all sorts of screen sizes in 2013, but the standard high-end experience will fall in the vicinity of 4.5 inches. Those of us moving into our second and third Android device will expect something at least as big as our current model.
Beyond size, resolution will sharpen. HTC had a leg up with the Droid DNA’s 1080p (versus 720p) resolution, but now nearly every handset maker you can think of is reportedly working on its own 5-inch 1080p HD display for its premium products. Whether or not you place a lot of importance on pixel density, expect screen resolution to be a big buzzword in 2013.
Quad-core will multiply
If you listen to companies such as Qualcomm and Nvidia, then you’re well aware of the fact that quad-core is the new spec hotness, and Android is the vanguard of competition among handset makers all vying for your little green Android dollars.
Gone are the days of big dual-core announcements. If you don’t come to the table with at least four cores of mobile prowess, you’re not really expecting to compete at the high end. We should anticipate that the big devices of the coming year will have quad-core 1.5GHz processors or faster, with some even hitting 2.0GHz by year’s end. Of course, the fight for faster processors might only be relevant on paper; real-world practicality is a different animal. It’s one thing to tout impressive clock speeds or point to a benchmark, but showing the benefits to end users is the most important win.
Play a lot of 3D games? You definitely care about who makes your phone’s CPU. Just want to see what this whole Android thing is all about? Jump in wherever you want, you’ll be just fine.
One area where we may see more improvements is phone memory and storage. If the previous year saw 2GB of RAM emerge as the top-of-the-line memory experience, next year may see us inching toward 3GB. Storage capacities for Android phones (and all phones) will creep up in 2013 as well, with 32GB becoming the standard for mid-range devices and 64GB common among high-end ones. This will be especially true for manufacturers opting for internal batteries and removing external storage, and I expect to see the first handset with 128GB of internal storage appear before 2013 is out.
Entry-level phones will benefit
You have to appreciate the trickle-down effect of technology as today’s top devices quickly become tomorrow’s mid-range experience. With that in mind, the $50-$100 Android smartphone of 2013 will be quite an impressive piece of hardware.
Dual-core processors should become the norm for your “basic” Android phone as single-core stuff gets pushed aside. The same may be said of the no-contract handsets, as we’ll continually get more for our money.
As every carrier scrambles to build out its next-gen data network, 4G LTE will be commonplace in Android smartphones. Sure, we’ll get the occasional 3G product, but that will diminish with time. This is not to say that 2013 will be the end of 3G Android, but the days of touting 4G LTE as a special feature will pass.
There is always a chance that we’ll see a 3D experience in an Android phone or two, but I have the feeling this is one technology that won’t take off. I’ve yet to run into someone who wants or needs 3D graphics in their mobile device. Sure, it’s a cool feature to show off once in a while, but we’re just not ready to adopt this baby.
I get the feeling that we’ll see a new surge in NFC-enabled accessories and technologies in the coming wave of tech conferences. The idea of tap-to-play speakers or media players doesn’t seem like much of a stretch for this year’s biggest mobile conferences, CES in January and Mobile World Congress in late February.
Perhaps the biggest issue facing smartphones with large displays and super-fast processors is battery life. Nobody wants to put their phone away to preserve juice; we bought that big screen for a reason.
Looking ahead to the new year, we expect to see more handsets with internal and/or higher-capacity batteries. The Droid Razr Maxx HD is still the benchmark for long-lasting batteries, but we should see the gap narrow. To that end, we may see less emphasis on “world’s thinnest” or “lightest” claims.
One device around the world
I cannot tell you how pleased I was when I learned that Samsung was going to adopt a single form factor for the Galaxy S3 and Galaxy Note 2 across countries and carriers. I’m sure that a number of accessory makers were quite happy with the decision as well. Samsung will employ the same strategy for the Galaxy S4 and will likely have record sales again in the new year.
As far as other companies going this route, HTC today seems to be the closest. I wouldn’t be surprised if its next flagship model were to hit multiple carriers with a single design. As nice as it was to have fewer models to choose from in the One series, it was still confusing to keep up with the various suffixes — One X, One X+, Evo 4G LTE. “Does my carrier offer that one? What’s the difference between this and that?” Along those lines, LG also seems to be slowly headed in this direction with the Optimus line.
Android comes to new territories
The Samsung Galaxy Camera wasn’t the first digital camera to run Android, but it was the first to tie into carriers.
Nikon, Polaroid, and other camera makers will dabble with Android underpinnings, and we’ll see smarter shooters in 2013. Pricing will need to come down for mass adoption; still, we will see carriers selling connected cameras in retail stores and online.
We will also see more kid-centric tablets and devices with Android under the hood in the next year. We might as well get used to the fact that Toys R Us and Walmart are going to offer $99 Android tablets.
Once the price point of generic knock-off tablets, the $100-$200 range now offers a decent experience for most. Come this time next year, it will not be strange to see a house with even more Android tablets for a range of age groups.
Shortly after Android became a recognized term in the mobile space, we saw the platform arriving in various electronic devices, including microwaves and washing machines.
I don’t think we’ll find too much of that in 2013, but it would not surprise me to see a refrigerator or appliance with a custom touch interface that runs Android. Not a full-blown experience, mind you, but something that gives hardware-makers more flexibility.
There is a chance that we’ll see more Android in automobiles in 2013, but it’ll have competition from RIM’s QNX OS. This won’t be a replicated tablet-like experience with full-on Google Play support, but something a little smarter than what we have today. It is easy to picture a 7-inch display that lets users hop from stereo to diagnostics to Google Maps.
Another area that would work well is embedding a tablet in the backs of the driver and passenger seats. With more cars offering Wi-Fi connectivity over time, a connected device just makes sense. Don’t be surprised if someone introduces a backseat experience that includes access to social networks as well as casual games such as checkers for road trips. For added fun, pair your Bluetooth game controller and dive into a 3D shooter.
Google I/O and major releases
If the last few years are any indicator, there will be at least five key moments for Android in 2013, starting with trade shows: CES in early January, the international Mobile World Congress in late February, and CTIA in late May. Samsung is also expected to launch its Galaxy S4 flagship phone at a standalone press event, if it follows 2012’s model.
Android itself will continue to gain speed, and Google will introduce new features that again pull away from iOS to set the industry pace. We don’t know much about Android 5.0 yet, but we’ll assuredly discover bits and pieces of upcoming features in the months just before Google I/O, especially if Google releases a new Nexus device or two to go along with the latest software build.
2013 will certainly be an exciting year for Android, with the mobile OS surely maintaining its mobile lead.
The mobile market is growing fast, but at the same time competition among providers for the same customers is expanding too.
As a result, even the most devoted customers may switch providers for a lower-cost plan, a better user experience or the latest services and devices. The simplest way to stay informed about the day-to-day market and face the new challenges is mobile web analytics services, which offer the insight brands and managers need to optimize their businesses by monitoring customer activity.
Mobile web analytics tools are similar to traditional web analytics and study data about users’ access to and activity on websites from mobile phones. Data gathered as part of mobile analytics includes information about user behavior (number of visits, usage, preferences); location (the cities, states, countries and regions access was made from); technical details (devices and platforms being used); error reporting (using current and historical reports to identify errors that might interrupt the user experience); promotional activities; etc.
Mobile web analytics tools can work in two ways:
– Out-of-network (off-site) – the analytics service collects and monitors data on how specific sites and applications are used from mobile phones.
– In-network (on-site) – installed within the operator's network, monitoring all mobile navigation patterns.
Out-of-network services collect page data using JavaScript and cookies, while in-network services give a more complete and accurate picture of what users are doing: an operator can see the whole of consumer habits and behavior, handset information, and consumer data gathered from visited web pages.
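To make the off-site mechanism concrete, here is a minimal sketch of what such a JavaScript tag does. Everything here is hypothetical for illustration — the cookie name, endpoint, and function names are not taken from any particular vendor.

```javascript
// Hypothetical off-site tracking sketch: read the visitor id from a cookie
// string and build the URL of a tracking pixel that reports the page view.
var VISITOR_COOKIE = 'vid';  // illustrative cookie name

// Extract the visitor id from a document.cookie-style string, if present.
function getVisitorId(cookieString) {
  var match = cookieString.match(
    new RegExp('(?:^|; )' + VISITOR_COOKIE + '=([^;]*)'));
  return match ? match[1] : null;
}

// Build the beacon URL that carries the page view back to the vendor.
function buildBeaconUrl(endpoint, visitorId, pageUrl, referrer) {
  return endpoint +
    '?vid=' + encodeURIComponent(visitorId) +
    '&url=' + encodeURIComponent(pageUrl) +
    '&ref=' + encodeURIComponent(referrer);
}

// In a page the tag would run roughly:
//   var id = getVisitorId(document.cookie);
//   new Image().src = buildBeaconUrl('https://stats.example.com/hit', id,
//                                    location.href, document.referrer);
```

The 1x1 "tracking pixel" request is the classic delivery vehicle: the browser fetches the image, and the query string carries the data to the vendor's servers.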
Mobile web analytics tools with varying capabilities are available at prices ranging from free (Google Analytics; Woopra, free for the basic tier; Facebook Insights; Flurry; AdMob analytics) to tens of thousands of dollars (Mixpanel, Localytics, Kontagent, etc.). So choosing the right mobile web analytics tool depends on several requirements, including the budget.
Below are the main categories to take into account:
• Features – every analytics program has its own feature set that helps you understand the customer better.
• Traffic – tracking detailed information such as who visits the website, what the visitor does (what they click on, what they view, etc.), and at what point they exit the website.
• Referrals – according to the Web Analytics Association, a referrer is "the page URL that originally generated the request for the current page view or object." Essentially, this is where your guest came from immediately before arriving on your website.
• Report Stat Intervals – detailed statistics in monthly and yearly reports that can also be broken down by day or hour.
• Events – according to the Web Analytics Association, an event is "any logged or recorded action that has a specific date and time assigned to it by either the browser or server."
• Visitor Details – web analytics programs keep track of each visitor to your site. This information can be used to identify target audiences, develop campaigns, or learn what might work better to increase conversions. Detailed geographic information about where the visitor is accessing the website from is also available in most cases.
• File Exporting – most web analytics programs offer a variety of exporting options to meet your specific needs.
• Tech Support/Help – web analytics solutions can be very complex, so product support is provided for a period of time following the initial purchase.
Now let's look at some of the tools that have become popular among mobile analysts:
* Bango – tracks visitor information (including visitors connecting over Wi-Fi networks), mobile marketing campaigns and one-click payments. Clients include EA Games, Facebook, Fox, Amazon, CNN and the Windows Phone Store.
* WebTrends – a web analytics company that also offers solutions for mobile analytics and tracking, giving insight into in-app ad engagement, session data and conversion history.
* Localytics Mobile App Analytics – enterprise-grade app analytics tools for developers and app marketers, including audience reports, customer insights, and reporting that helps you maximize in-app purchase revenue.
* Flurry – draws on its integration with over 200 million apps to provide developers with app store data across iOS and Android platforms. Measures consumer behavior to help developers better monetize and build more effective apps.
* Mixpanel – claims to have built the most advanced mobile and web analytics platform, analysing 6.2 billion actions every month. Provides insight into app usage and conversion optimisation. Recently launched mobile analytics for the Android platform.
Choosing the right mobile web analytics tool will have a huge impact on a company's profit growth. It will help monitor customer activity along with metrics gathered from mobile devices.
It's a sound management strategy for achieving success and standing out from the competitors.
Thank you for your attention, and please feel free to share your experience.
Business Development Manager
Professional Software Development
Let's start with a brief history of the two.
– Sending HTML page data to a server using AJAX;
– Animating HTML elements;
– Validating HTML forms;
– Storing user information that can help with web analytics, ad tracking, etc.
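The tasks above can be sketched in a few lines of jQuery. Everything here is illustrative: the form, field, and endpoint names are hypothetical, and the wiring is wrapped in a function (to be passed to jQuery's ready callback) purely for clarity.

```javascript
// Illustrative: validate a form, animate the error message, and submit
// over AJAX. In a page: jQuery(function () { wireSignupForm(jQuery); });
function wireSignupForm($) {
  $('#signup').on('submit', function (event) {
    event.preventDefault();                       // keep the page from reloading
    if ($('#email').val() === '') {
      $('#error').fadeIn(200);                    // animate the message into view
      return;                                     // invalid: do not send
    }
    $.post('/signup', $('#signup').serialize());  // send the form data via AJAX
  });
}
```

Validation, animation, and AJAX in a dozen lines is a fair preview of why the library caught on.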
jQuery mostly targets designers and less experienced developers, but it can be of interest to experienced programmers as well. Here I will try to enumerate the reasons why:
1) Selecting elements. Every jQuery operation starts by selecting one or more nodes from the DOM. jQuery's selection syntax is an interesting hybrid of CSS 1 and 2, bits of CSS 3, some XPath, and a few custom extensions as well.
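A few examples of what that hybrid accepts (the ids and classes here are hypothetical; each string would be passed to the jQuery() function):

```javascript
// Selector strings jQuery understands, from plain CSS to its own extensions:
var selectors = [
  'div',                // CSS 1: every <div> on the page
  '#nav a.external',    // CSS 2: descendant combinator plus a class
  'li:first-child',     // CSS 3: structural pseudo-class
  'a[href^="https:"]',  // CSS 3: attribute prefix match
  'div:visible'         // a custom jQuery extension, not in any CSS spec
];
// In a page: jQuery('#nav a.external').addClass('offsite');
```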
3) The $ function. You could object that it is not quite true that jQuery introduces only one object into the global namespace, since there is also $: the $ symbol is set up as a shortcut for jQuery. jQuery handles this gently: if you want your former $ function back (for example, if you have a piece of code based on Prototype), you can call jQuery.noConflict() to restore the old $. At first the widespread use of $ in jQuery can look like no more than a clever trick, but thinking of it as the jQuery symbol makes everything seem a lot more sensible.
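A toy model of the mechanism (this is not jQuery's actual source, just the idea): the library remembers what $ pointed at before it loaded, and noConflict() hands it back while returning a private handle you can keep using.

```javascript
// Toy model of $ / noConflict; `globals` stands in for the browser's window.
var globals = { $: 'the-old-dollar' };   // e.g. Prototype already owns $

function installLibrary(g) {
  var saved = g.$;                       // remember the previous owner of $
  var lib = function (selector) { return 'jQuery(' + selector + ')'; };
  lib.noConflict = function () {
    g.$ = saved;                         // restore the old $
    return lib;                          // caller keeps jQuery under a new name
  };
  g.$ = lib;                             // the library claims $ on load
  return lib;
}

var jq = installLibrary(globals).noConflict();
// globals.$ is 'the-old-dollar' again, but jq('div#intro') still works
```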
5) Manipulating the DOM. jQuery offers a few smart ways of making large-scale manipulations to the DOM. The first is quite surprising: the jQuery function can take a snippet of HTML, which it will turn into a DOM element.
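The trick hinges on the jQuery() function inspecting its string argument. Simplified well beyond the real library's code, the dispatch looks roughly like this:

```javascript
// Simplified sketch of jQuery()'s string dispatch: a string starting with "<"
// is treated as an HTML snippet to build nodes from, anything else as a
// selector to look nodes up.
function looksLikeHtml(input) {
  return /^\s*</.test(input);
}

// In a page:
//   jQuery('<p class="greeting">Hello</p>').appendTo('#intro');  // builds a node
//   jQuery('p.greeting');                                        // finds nodes
```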
6) The returned beast. The object returned by jQuery's selectors is quite interesting. It represents a set of DOM elements and behaves a bit like an array: it has a length property, items can be accessed by index, and (most importantly) Firebug treats it as an array when displaying it in the interactive console. This is a clever illusion: the collection is actually a jQuery object, incorporating a large number of methods which can be used to query, modify and extend the collection of selected elements.
There are three principal categories of jQuery methods: those that manipulate all of the matched elements, those that return a value from the first matched object, and those that modify the selection itself. If you have Firebug you can try these out interactively: use the Insert jQuery bookmarklet first to load the jQuery library into any page, then paste the code examples into the Firebug console. I would like to note a convenient symmetry in these methods: called with two arguments (or a map of properties) they set attributes on the matched elements, while called with a single argument they read the value of that attribute. This symmetry is used throughout jQuery, which makes the API much easier to memorize.
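That symmetry can be sketched in plain JavaScript. This is a toy version of the pattern behind methods like attr() and css(), not the library's own code:

```javascript
// One argument reads the property from the first matched element; two
// arguments write it to every element and return the collection for chaining.
function makeCollection(elements) {
  return {
    attr: function (name, value) {
      if (value === undefined) {
        return elements[0][name];                 // getter: first element only
      }
      elements.forEach(function (el) { el[name] = value; });  // setter: all
      return this;                                // keep the chain alive
    }
  };
}

var coll = makeCollection([{ title: 'first' }, { title: 'second' }]);
coll.attr('title');            // reads 'first' from the first element
coll.attr('title', 'shared');  // writes 'shared' to both elements
```

One method name, two behaviors: that is why there are so few names to remember.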
8) jQuery and Ajax. jQuery has the best API for working with Ajax. The simplest form of an Ajax call looks like jQuery('div#intro').load('/some/fragment.html'). This performs a GET request against /some/fragment.html and populates div#intro with the returned HTML fragment. It's a neat shortcut, but what if you want to do something more advanced, like displaying an Ajax loading indicator? jQuery exposes custom events (ajaxStart, ajaxComplete, ajaxError and more) for you to hook this kind of code into.
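Hooking a loading indicator to those global events takes only a couple of lines. The spinner id is hypothetical, and the wiring is wrapped in a function (to pass to jQuery's ready callback) for clarity:

```javascript
// Show a spinner while any Ajax request is in flight; hide it when done.
// In a page: jQuery(function () { wireSpinner(jQuery); });
function wireSpinner($) {
  $(document).ajaxStart(function () { $('#spinner').show(); })
             .ajaxComplete(function () { $('#spinner').hide(); });
}
```

Because the events are global, the indicator covers every Ajax call on the page without touching the individual requests.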
9) Extensions. Considering how much functionality comes as standard, it is worth noting that the compressed jQuery distribution is only 20 KB, and even less gzipped. Functionality beyond this core can be supplied by extensions, which can (and do) add new methods to jQuery objects. If you want, you can do something like this: jQuery('p').bounceAroundTheScreenAndTurnGreen(); The extension mechanism in jQuery provides documented ways of adding such methods to the system. Simplicity and ease of use have attracted a large community of extension authors; the extensions directory has more than a hundred examples. Another nice feature is the ability to add your own selectors as well as your own methods: the MoreSelectors extension adds selectors like div:color(red), which selects all divs with red text.
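Here is the shape such an extension takes. greenify is a made-up plugin, but plugins attach to jQuery.fn in exactly this way, returning `this` so the chain keeps working:

```javascript
// A minimal (hypothetical) plugin: add a method to jQuery.fn, operate on
// every matched element via each(), and return `this` to preserve chaining.
function installGreenify(jQuery) {
  jQuery.fn.greenify = function () {
    return this.each(function () {
      this.style.color = 'green';  // inside each(), `this` is a raw DOM element
    });
  };
}

// In a page, after loading jQuery and the plugin:
//   jQuery('p').greenify().fadeIn();
```

Because jQuery.fn is the prototype shared by every jQuery collection, one assignment makes the new method available everywhere.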
10) A few words about leaky abstractions. Studying jQuery more closely, you may run into one philosophical blocker. In certain cases jQuery uses truly unusual methods to achieve particular functionality, and some parts of the library (such as the selector engine's source code) look scary. Using these parts effectively requires an understanding of how the library works: you need to know some basic concepts, the differences between browsers, and the set of techniques the library uses to work around them. No library can protect you 100% against weird browser behaviour, but as long as you have a grounding in the underlying theory you should be able to figure out whether a problem stems from your own code, your library, or the underlying implementation.