
Post by support » Fri Nov 23, 2012 8:38 pm

jakeuk wrote:One more question, sorry! Is it possible to put more than one of the XML feeds in a single Python script, thereby giving more than one total in the notification balloon? Actually, it would be great to be able to add the numbers from two feeds together and show the total.

This would be trivial to do. Post the script you've got now and I'll post an updated version. Or maybe the Awasu Monster will beat me to it - it's 3:30 in the AM for me here :|

jakeuk wrote:My life is transformed!

Now there's a quote for the testimonials file :-)


Post by jakeuk » Fri Nov 23, 2012 9:45 pm

Well, maybe "transformed" is overstating it a little, but I am very happy.

Once again, I can't publicly post the XML links, so I have replaced them with "link 1" in the .py code below. Obviously I'd like to add a "link 2" and, if possible, print a total of "link 1" + "link 2" (or rather, of the values they return).



import urllib2
import re
# get the data value
buf = urllib2.urlopen( "link 1" ).read()
mo = re.search( "<STREAMINGPLAYERS>(.*)</STREAMINGPLAYERS>" , buf )
val = mo.group( 1 )

# generate the feed
print "<rss>"
print "<channel>"
print "<item>"
print "<link> "link 1" </link>"
print "<title>" , val , "</title>"
print "<description>" , val , "</description>"
print "<guid isPermalink=\"false\">" , val , "</guid>"
print "</item>"
print "</channel>"
print "</rss>"


Post by kevotheclone » Sat Nov 24, 2012 2:12 am

jakeuk wrote:My life is transformed!

I agree! Awasu has transformed a portion of my life for the better! :)

Of course there's more than one way to achieve this goal, and my version is probably not the most elegant, but it seems to work, providing the total in the feed item's title and feed-by-feed counts in the feed item body.

I'm not sure what to do with the <link/> element, so I just pointed it to the first URL.

jakeuk, I know you wanted to see these values in the popup notifications, but another (or maybe an additional) alternative is to use multiple instances of the earlier version of this plugin channel, so that each radio station's data stays separate. Then you can create a Report in Awasu that periodically exports the data into an application like Excel, where you can use SUM, MIN, MAX, MEAN, PivotTables, etc.
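
If you ever go the export route, the same counts could also be appended to a CSV file that Excel opens directly. Just a sketch, mind you; the "listeners.csv" file name and the append_counts helper are made up:

Code: Select all

import csv
import datetime

# sketch only: append one timestamped row per feed to a CSV that Excel can open
# ("listeners.csv" is a made-up file name; counts is { url : listener_count })
def append_counts( counts ):
    with open( "listeners.csv" , "ab" ) as f: # "ab" so rows accumulate across runs
        writer = csv.writer( f )
        now = datetime.datetime.now().isoformat()
        for link , val in counts.iteritems():
            writer.writerow( [ now , link , val ] )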

Anyway, try creating a new Channel, and give this a try...

Code: Select all

import datetime
import re
import urllib2

# define the URLs as a Dictionary with initial counts of zero
# add more links as necessary
urls = { "link1" : 0 , "link2" : 0 , "link3" : 0 }

# guid uses date time, so no chance of duplicates: 2012-11-23 17:35:56.968000
guid = datetime.datetime.now()

description = "" # intilize to empty string

# get the data value
for link, val in urls.iteritems() :
    buf = urllib2.urlopen( link ).read()
    mo = re.search( "<STREAMINGPLAYERS>(.*)</STREAMINGPLAYERS>" , buf )
    val = mo.group( 1 )
    urls[link] = val # update the Dictionary with the actual value
    description += "<p>" + link + ": " + str(val) + "</p>" # build feed item description

# sum all of the individual listener counts
sum = sum(urls.values())

# generate the feed
print "<rss>"
print "<channel>"
print "<item>"
print "<link>" + urls.keys()[0] + "</link>"
print "<title>Total listener count: " , sum , "</title>"
print "<description><![CDATA[" , description , "]]></description>"
print "<guid isPermalink=\"false\">" , guid , "</guid>"
print "</item>"
print "</channel>"
print "</rss>"


Post by jakeuk » Sat Nov 24, 2012 11:07 am

Many thanks Kev. I tried a new Python script using the above but got the error "Can't determine the feed type".

I'm not sure if you have the URLs that I sent to Taka via email, but they are .XML files, if that has any bearing on the situation. The error came up at line 23 of the script I made:

sum = sum(urls.values())
TypeError: unsupported operand type(s) for +: 'int' and 'str'

I used two URLs to try it out and put them in place of "link1" and "link2". I tried both with and without the speech marks. The above error message is the one generated with them; without them, the error displays the URL in each case.
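
With the speech marks in, my dictionary line looks like this (real addresses swapped for placeholders):

Code: Select all

urls = { "http://host1.example/status.xml" : 0 , "http://host2.example/status.xml" : 0 }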


Post by kevotheclone » Sat Nov 24, 2012 5:26 pm

It might be that one of the links isn't returning a number and that's why the sum() function is failing.

Also, looking back, I see that I probably shouldn't have used "sum" as a variable name, since it's also a built-in function name. Although it didn't cause a problem for me, it's not a "best practice".

I think the final script should have some error handling around the urllib2.urlopen( link ).read() and mo = re.search() lines in case an HTTP error occurs. Plus, the total = sum(urls.values()) line needs to be updated so that it does not try to sum a non-numeric value.
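
Something along these lines is the kind of thing I mean; just a sketch that skips any link that fails, not a finished fix:

Code: Select all

# sketch: skip any link that can't be fetched or parsed
for link in urls.keys() :
    try :
        buf = urllib2.urlopen( link ).read()
        mo = re.search( "<STREAMINGPLAYERS>(.*)</STREAMINGPLAYERS>" , buf )
        val = mo.group( 1 ) # raises AttributeError if there's no match
    except ( urllib2.URLError , AttributeError ) :
        description += "<p>" + link + ": unavailable</p>"
        continue
    urls[ link ] = val
    description += "<p>" + link + ": " + str( val ) + "</p>"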

I don't have time to fold all of that in this morning, but you might want to try this script, just for grins; it's what I used to test with.
Since I didn't have access to your URLs, and didn't have time to create a web page that returns the same XML as your feeds, I modified the script to generate a random number between 1 and 20 for each "feed". Refresh it as often as you like to see the results. I'll try to take a more detailed look at this tonight when I have more time.

Code: Select all

import datetime
import random
import re
import urllib2

# define the URLs as a Dictionary with some placeholder counts
# add more links as necessary
urls = { "link1" : 1 , "link2" : 2 , "link3" : 14 }

# guid uses date time, so no chance of duplicates: 2012-11-23 17:35:56.968000
guid = datetime.datetime.now()

description = "" # intilize to empty string

# get the data value
for link, val in urls.iteritems() :
##    buf = urllib2.urlopen( link ).read()
##    mo = re.search( "<STREAMINGPLAYERS>(.*)</STREAMINGPLAYERS>" , buf )
##    val = mo.group( 1 )
    val = random.randint(1, 20) # fake a listener count for testing
    urls[link] = val # update the Dictionary with the fake value
    description += "<p>" + link + ": " + str(val) + "</p>" # build feed item description

# sum all of the individual listener counts
total = sum(urls.values())

# generate the feed
print "<rss>"
print "<channel>"
print "<item>"
print "<link>" + urls.keys()[0] + "</link>"
print "<title>Total listener count: " , total , "</title>"
print "<description><![CDATA[" , description , "]]></description>"
print "<guid isPermalink=\"false\">" , guid , "</guid>"
print "</item>"
print "</channel>"
print "</rss>"


Post by jakeuk » Sat Nov 24, 2012 6:23 pm

Much appreciated. Thanks.


Post by kevotheclone » Mon Nov 26, 2012 8:30 am

Now that I've had more time to look at this, I realize that the error you received was because your values are actually strings, even though they contain a number, so they need to be explicitly converted to numbers. In my random number example I was always dealing with an integer data type.
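
You can see the difference in two lines:

Code: Select all

sum( [ 0 , "42" ] )          # TypeError: unsupported operand type(s) for +: 'int' and 'str'
sum( [ 0 , int( "42" ) ] )   # 42 -- fine once the string is converted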

This code should do the trick. I didn't add any error handling around the urllib2.urlopen( link ).read() and mo = re.search() lines; I'm pretty sure it needs it, but it's not your immediate problem and I'm still short of time.

I did change total = sum( urls.values() ) to total = sum( [ int(val) for val in urls.values() if val.isdigit() ] ), which checks whether each value can be converted to an integer and, if so, explicitly converts it to a number.
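
For example, with two good values and one fetch that returned junk:

Code: Select all

vals = [ "12" , "7" , "N/A" ] # "N/A" stands in for a failed fetch
total = sum( [ int( val ) for val in vals if val.isdigit() ] ) # 19; "N/A" is skipped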

I also added a <pubDate/> element to the feed item, just in case you do consider exporting this to a database or Excel to track "listener trends" over time.

Code: Select all

import datetime
import re
import urllib2

# define the URLs as a Dictionary with an initial count of zero
# add more links as necessary
urls = { "link1" : 0 , "link2" : 0 , "link3" : 0 }

# guid uses date time, so no chance of duplicates: 2012-11-23 17:35:56.968000
guid = datetime.datetime.now()
pubDate = datetime.datetime.strftime(guid, "%a, %d %b %Y %H:%M:%S +0000") # RFC 822 format

description = "" # initialize to empty string

# get the data value
for link, val in urls.iteritems() :
    buf = urllib2.urlopen( link ).read()
    mo = re.search( "<STREAMINGPLAYERS>(.*)</STREAMINGPLAYERS>" , buf )
    val = mo.group( 1 )
    urls[link] = val # update the Dictionary with the fetched value
    description += "<p>" + link + ": " + str(val) + "</p>" # build feed item description

# sum all of the individual listener counts
total = sum( [ int(val) for val in urls.values() if val.isdigit() ] )

# generate the feed
print "<rss>"
print "<channel>"
print "<item>"
print "<link>" + urls.keys()[0] + "</link>"
print "<title>Total listener count: " , total , "</title>"
print "<description><![CDATA[" , description , "]]></description>"
print "<guid isPermalink=\"false\">" , guid , "</guid>"
print "<pubDate>" , pubDate , "</pubDate>"
print "</item>"
print "</channel>"
print "</rss>"


Post by jakeuk » Mon Nov 26, 2012 11:02 am

Thanks again for your help on this, Kev. Sadly, it's still unable to determine the feed type. Both of the feeds I'm using are definitely working.


Post by kevotheclone » Tue Nov 27, 2012 7:47 am

OK, I created a couple of simple PHP pages to test this, and I'm sure this will work.

Code: Select all

import datetime
import re
import urllib2

# define the URLs as a Dictionary with an initial count of zero
# add more links as necessary
urls = { "link1" : "0" , "link2" : "0" , "link3" : "0" }

# guid uses date time, so no chance of duplicates: 2012-11-23 17:35:56.968000
guid = datetime.datetime.now()
pubDate = datetime.datetime.strftime(guid, "%a, %d %b %Y %H:%M:%S -000")

description = "" # intilize to empty string

# get the data value
for link, val in urls.iteritems() :
    buf = urllib2.urlopen( link ).read()
    mo = re.search( "<STREAMINGPLAYERS>(.*)</STREAMINGPLAYERS>" , buf )
    val = mo.group( 1 )
    urls[ link ] = val
    description = description + "<p>" + link + ": " + str( val ) + "</p>" # build feed item description

# sum all of the individual listener counts
total = sum( [ int( val ) for val in urls.values() if val.isdigit() ] )

# generate the feed
print "<rss>"
print "<channel>"
print "<title>Radio Station Listener Count</title>"
print "<item>"
print "<link>" + urls.keys()[ 0 ] + "</link>"
print "<title>Total listener count: " , total , "</title>"
print "<description><![CDATA[" , description , "]]></description>"
print "<guid isPermalink=\"false\">" , guid , "</guid>"
print "<pubDate>" , pubDate , "</pubDate>"
print "</item>"
print "</channel>"
print "</rss>"

Fingers crossed...


Post by jakeuk » Tue Nov 27, 2012 12:29 pm

Working like a dream :)

Thank you soooo much.

I might tackle the data export thing at some point, as I have been playing around with Google Drive spreadsheets lately, but I'd better get some work done right now.

Many thanks again guys. This is awesome.

