PMXBOT Log file Viewer


#mongodb logs for Friday the 4th of April, 2014

[00:20:10] <richthegeek_> i'm representing iso terrain (game map) in mongo using a geo index .. as each position is unique, is it possible / clever to use the _id field both as the UUID and as a geo index?
[00:26:13] <rkgarcia> a field can be used in multiple indexes richthegeek
[00:28:20] <richthegeek_> rkgarcia: sure, just making certain doing it on the ID won't cause horrible side effects
[00:28:36] <richthegeek_> I know, certainly, that I need to ensure the field ordering is always the same
[00:33:59] <richthegeek_> hmm, the _id field can't be an array, but it seems like 2d indexes require an array to work properly on $near queries :s
[00:39:05] <richthegeek_> not a *massive* issue, could store the positions as either 0-padded (000130004) or delimited (13,4) strings ... shame really!
[00:49:40] <richthegeek_> scratch that, i was just querying wrong - so you *can* create a 2d index on the _id field
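A minimal sketch of what richthegeek_ lands on: keeping the map position in `_id` as an embedded document (since `_id` can't be an array) and putting a 2d index on it. The field names and coordinates here are illustrative assumptions, not taken from the log; note his earlier point that the field ordering inside `_id` must stay consistent.

```python
# Index specification; with pymongo this would be created via
#   collection.create_index([('_id', '2d')])
index_spec = [('_id', '2d')]

# _id can't be an array, but a two-field embedded document works as a
# legacy coordinate pair. Field order must be the same in every document.
doc = {'_id': {'x': 13, 'y': 4}}

def near_query(x, y):
    """Build a $near query against the _id field."""
    return {'_id': {'$near': [x, y]}}

print(index_spec, doc, near_query(13, 4))
```

`collection.find(near_query(13, 4))` would then return documents sorted by distance from that grid position.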
[06:23:00] <greenmang0> what is the right method to remove a secondary from a replica set, is there any way to avoid re-elections?
[06:23:11] <greenmang0> i generally use rs.remove()
[06:23:45] <greenmang0> but it makes all the members secondary for a while (2-3 seconds), causing reads/writes to fail
[06:26:29] <joannac> Unfortunately, no
[06:26:57] <joannac> If it's really critical, you could just shut the secondary down, and do the rs.remove() later, when you can take the downtime
[06:27:11] <joannac> but you'll get a lot of messages in your log in the mean time
[06:30:21] <greenmang0> joannac: messages in logs are fine, i am more concerned about reads/writes failure
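What `rs.remove()` amounts to is a replSetReconfig with the departing member dropped and the config version bumped, which is why it triggers an election. A sketch of that config edit as a pure function, using the replica set config document shape; host names are made-up examples.

```python
def remove_member(config, host):
    """Return a new replica set config with `host` removed and version bumped."""
    new_config = dict(config)
    new_config['members'] = [m for m in config['members'] if m['host'] != host]
    new_config['version'] = config['version'] + 1
    return new_config

config = {
    '_id': 'rs0',
    'version': 3,
    'members': [
        {'_id': 0, 'host': 'db0.example.com:27017'},
        {'_id': 1, 'host': 'db1.example.com:27017'},
        {'_id': 2, 'host': 'db2.example.com:27017'},
    ],
}

trimmed = remove_member(config, 'db2.example.com:27017')
print(trimmed['version'], [m['host'] for m in trimmed['members']])
```

Submitting a config like this (via `rs.reconfig()` in the shell) is what forces the brief re-election window greenmang0 is seeing; joannac's workaround of shutting the secondary down first just defers that window to a planned maintenance slot.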
[06:48:08] <proteneer> my insertion is ignoring the _id field
[06:48:13] <proteneer> and insists on inserting its own ObjectId
[06:48:37] <proteneer> (using and upsert)
[06:48:41] <proteneer> an upsert*
[06:49:07] <joannac> proteneer: erm. pastebin the update statement
[06:49:37] <proteneer> self.mdb.servers.ccs.update({'_id': self.name}, {'host': external_host}, upsert=True)
[06:49:54] <proteneer> this behavior isn't in Mongo 2.6.0rc
[06:49:58] <greybrd> Nodex & samurai2: thanks for your inputs. Finally I settled on a tailable cursor. still in the evaluation stage.. needs to be approved...
[06:50:27] <joannac> proteneer: if the _id doesn't exist, how would mongodb know?
[06:50:41] <joannac> (what to set the _id to)
[06:50:55] <proteneer> ah
[06:51:25] <proteneer> i wanted it to set what the query _id is
[06:51:34] <proteneer> and it intact sets the _id to the query's _id in mongodb 2.6c
[06:51:42] <proteneer> infact8
[06:51:48] <proteneer> as opposed to generating an ObjectId
[06:55:58] <joannac> Yup, new behaviour
[06:57:11] <joannac> wow, that's awesome
[06:59:01] <proteneer> wow, that's subtle
[06:59:11] <proteneer> (and annoying)
[07:02:13] <joannac> The new behaviour is. I predict confused people :(
[07:02:33] <joannac> I was excited about the new response you get to an update command
[07:06:11] <proteneer> its INCREDIBLY subtle
[07:09:30] <joannac> proteneer: the new behaviour is incredibly subtle, or the old?
[07:09:32] <joannac> or both?
[07:10:13] <bartjens> hi
[07:11:50] <bartjens> someone here?
[07:12:01] <joannac> Yes. Do you have a question?
[07:21:24] <bartjens> hi joannac
[07:21:27] <bartjens> yes i have
[07:22:03] <rspijker> would you like to ask it?
[07:22:56] <bartjens> is it wise to store millions of records in 1 table ?
[07:24:51] <rspijker> bartjens: there is nothing wrong with storing millions of _documents_ in 1 _collection_
[07:26:11] <asido> speaking of differences between transactional and non-transactional db engines, is it true for every single type of query that transactional writes it to binary log only when it commits it or is valid specifically only for transactions?
[07:28:22] <bartjens> oke thnx rspijker
[07:28:23] <kali> commiting implies transaction
[07:28:37] <kali> so i'm not sure i understand what you're asking
[07:29:21] <kali> i don't mind too much the offtopicness, but i hope you're aware of it... you may find people more litterate about this somewhere else
[07:30:20] <asido> kali, I am figuring whether DB can do commit when performing simple CRUD operations equivalent to plain INSERT, UPDATE, DELETE, SELECT SQL queries. if not, then it I guess such queries have no difference in isolation level when you are running then on transactional and non-transactional engine
[07:31:22] <asido> kali, perhaps transactional db engine translates the queries in it's own style making it transaction-like. that's what I don't know
[07:31:47] <kali> asido: well, iirc, the canonical relational "therory" says that everything have to be performed in a transaction
[07:31:56] <kali> asido: including read
[07:32:47] <kali> asido: but then, that is more or less equivalent to a single shared lock on your database, so nobody implements that in real life
[07:33:39] <kali> asido: i remember that in oracle shell (sqlplus), when you were performing an update/insert/delete, the shell was issuing an implicit BEGIN
[07:33:59] <kali> asido: so without a commit at the end, your writes were lost when closing the session
[07:34:47] <kali> asido: other engines, like psql and mysql were more relaxed, maybe just behaving like you say, wrapping every write request in its own tx
[07:34:55] <joannac> From my DB2 days, there was an implicit COMMIT at the end of every operation
[07:35:49] <kali> joannac: yeah, i think oracle was the exception there
[08:26:10] <shaisnir> Hi all, I have an issue with sessions. Locally they are stored no problem but on the VPS, sessions are not being saved. any idea?
[08:28:11] <kali> what ?
[08:28:41] <shaisnir> kali: referred me?
[08:29:17] <Nodex> what do sessions have to do with Mongodb?
[08:29:56] <kali> shaisnir: yes.
[08:32:30] <shaisnir> locally a sessions collection is being filled but in the VPS it remains empty for some reason :/
[08:34:50] <kali> it's highly unlikely the cause is within mongodb or within the virtual machine. you probably have broken some configuration of your app somewhere
[08:38:13] <shaisnir> 10x
[10:12:10] <Aartsie> Hi all
[10:12:44] <Aartsie> When i do db.collection.remove() it wont be clear the disk space ?
[10:15:12] <rspijker> Aartsie: It will not
[10:15:23] <Aartsie> how can i do this ?
[10:15:41] <Aartsie> our disk is used for 84% and 50% is old data :P
[10:15:42] <rspijker> only with a repairDatabase()
[10:15:57] <rspijker> if you remove it, the old space will be used again
[10:16:02] <rspijker> (especially if you compact)
[10:16:08] <rspijker> but it won't be given back to the fs
[10:16:42] <Nodex> thta;s not advisable on a production system
[10:16:47] <Nodex> that's *
[10:17:37] <rspijker> true, but sometimes there is no easy way around it
[10:17:43] <Aartsie> hmm are there any other solutions to get the free space ?
[10:18:23] <rspijker> back to the fs?
[10:18:24] <rspijker> nope
[10:18:53] <Aartsie> but with a repairDatabase it will use the journals right ?
[10:23:17] <Aartsie> how do you guys think about http://docs.mongodb.org/manual/reference/command/compact/#dbcmd.compact ?
[10:26:08] <greenmang0> why do I get "assertion 13436 not master or secondary; cannot currently read from this replSet member ns:config.settings query:{}", i just provisioned a new replica member and added it to replica set,
[10:26:43] <greenmang0> it successfully synced with primary and now in secondary state as expected, but while syncing i kept getting assertion 13436
[10:40:01] <Aartsie> hmm got a big problem, i have not enough disk space for a repairDatabase
[10:40:06] <Aartsie> any solutions ?
[10:41:15] <Nodex> put some bigger disk in?
[10:41:57] <Aartsie> hmmm its already 1 TB :P
[10:43:27] <greenmang0> Aartsie: what other option could there be?
[10:43:54] <Aartsie> maybe a command i can use without disk space
[10:45:47] <greenmang0> hmm, adding/expanding disk is the only documented option i guess
[11:03:48] <Doxin> what's the best way to find all documents with their timestamp past a certain date?
[11:04:49] <sybrek> hi .. does anybody have informations about how to use mongodb as embedded database ?
[14:56:40] <VarunVijayarghav> Hey all, we have an issue with mongos + pymongo in our infrastructure. For some reason - if we make a change to the mongo shards - our MongoClient in pymongo does not seem to "detect" the change until its restarted
[14:57:40] <VarunVijayarghav> For example - the other day we had to drop a database - and start populating it from scratch with new indexes. But the reader processes of the database kept getting an empty response even after the db was populated
[14:58:28] <VarunVijayarghav> After we detected that this was going on we restarted the mongos and the reader processes - and then it fixed itself
[16:03:28] <saml> how do I do SELECT uri AS canonicalUrl from tb; in mongodb? projection AS part
[16:04:15] <saml> db.c.find(query,{uri: {$as: 'canonicalUrl'}}) works. thanks.
[16:05:30] <saml> error: { "$err" : "Unsupported projection option: $as", "code" : 13097 }
[16:05:39] <saml> waht am i gonna do?
[16:05:43] <saml> are you in agile scrum meeting?
[16:05:51] <saml> no time for meeting. come to irc
[16:05:54] <_boot> if it's like aggregation try find(query, { uri: "$canonicalUrl" })
[16:06:15] <_boot> no idea though
[16:06:46] <saml> i don't think it's supported
[16:10:26] <kali> no, it wont work with find
[16:11:23] <kali> but it's so easy to do at the application level... what problem are you trying to solve ?
[16:12:18] <synesp> question about data structure
[16:12:25] <synesp> lets say I have data like this: http://postimg.org/image/9hodcx98v/
[16:12:54] <synesp> I need to get the key name in regions (so R1, R2) based on a given zip code
[16:13:05] <synesp> how can I query this? how can I restructure my data
[16:13:32] <synesp> I could have any number of regions, so R1 - R40 lets say
[16:18:56] <palominoz> hello, is it possible to update an entire collection of documents like { test: [ 'object-id'] } to documents like { test: 'object-id' } with the mongo shell?
[16:22:33] <Nodex> stringify the objectId?
[17:20:23] <cheddar> Derick, joannac, mstearn, Number6, if I have a MongoClient with an insert operation going on and I call close() on the MongoClient. Will it interrupt the insert operation or will it wait for that to complete before closing?
[17:20:38] <cheddar> This is using the node.js client
[17:21:31] <cheddar> the docs say, "Close the current db connection, including all the child db instances. Emits close event if no callback is provided." Which doesn't answer this question
[18:20:55] <slikts> has mongodb ever herped so hard that it derped?
[18:25:46] <cheeser> http://static.gamespot.com/uploads/original/332/3327979/2417454-8876767872-laugh.png
[18:38:22] <ddod1> [open question; using node-native] If I have a use-case like a blog host where people can change their subdomains and other fields that need to be unique, what would be the best way to prevent conflicts? Should I set the fields with "ensureIndex: unique" or manually check to see if those fields exist in the collection?
[18:43:46] <lorgio> Has anyone done any work with Hadoop<=>Mongo MapReduce with Ruby?
[18:50:30] <lorgio> has anyone used the mongo-hadoop connector?
[18:53:13] <cheeser> what's up?
[19:01:56] <lorgio> i'm trying to use the Ruby Connector
[19:02:34] <lorgio> and the streaming has a ruby generator to setup a map and reducer and be able to connect
[19:02:44] <lorgio> but i can't get a working model...even from the examples
[19:03:55] <lorgio> i'm not sure if streaming-assembly is still being supported or not
[19:04:24] <lorgio> there isn't really any documentation anymore..there are links IN the documentation, but they go to the general Mongo-Connector page
[19:05:40] <lorgio> i just get a "Exception in thread "main" java.lang.ClassNotFoundException: com.mongodb.hadoop.streaming.MongoStreamJob " whenever i run it
[19:06:49] <lorgio> cheeser: any insights?
[19:14:22] <cheeser> lorgio: yes. that was a bug introduced a while back that made it's way in to at least the 1.2 release
[19:14:42] <cheeser> go to wherever your streaming jar is and run this script: https://gist.github.com/evanchooly/9981291
[19:22:40] <lorgio> does this only affect the streaming assembly jar or all of them?
[19:22:44] <pootieta1g> hey guys, I'm testing uptime, the server monitoring tool and I'm getting the following error from MongoDB: MongoDB error: auth fails - Make sure a mongoDB server is running and accessible by this application
[19:23:21] <pootieta1g> I don't know much about mongo yet, just got started
[19:27:50] <cheeser> lorgio: just streaming
[19:34:03] <lorgio> cheeser: the ruby documentation says to use the streaming-assembly, So i changed it to the regular streaming, and i ran your script..which had an output of "added manifest"
[19:34:08] <lorgio> but I don't get an error
[19:34:26] <lorgio> Exception in thread "main" java.lang.ClassNotFoundException: -mapper
[19:34:34] <cheeser> which ruby documentation?
[19:35:44] <lorgio> https://github.com/mongodb/mongo-hadoop/tree/master/streaming/language_support/ruby
[19:37:06] <cheeser> yeah, that "assembly" nomenclature is no longer used with the build change. i'll update those docs
[19:39:12] <lorgio> so i'm trying to run this command against the mongo-hadoop-streaming.jar...but i'm getting an error with -mapper
[19:39:28] <lorgio> hadoop jar $HADOOP_HOME/libexec/share/hadoop/mongo-hadoop/mongo-hadoop-streaming-1.2.1-SNAPSHOT-hadoop_2.3.jar \
[19:39:28] <lorgio> -mapper ./mapper.rb \
[19:39:28] <lorgio> -reducer ./reducer.rb \
[19:39:28] <lorgio> -inputURI mongodb://127.0.0.1/mongo_hadoop.yield_historical.in \
[19:39:28] <lorgio> -outputURI mongodb://127.0.0.1/mongo_hadoop.yield_historical.out.ruby \
[19:39:29] <lorgio> -inputformat com.mongodb.hadoop.mapred.MongoInputFormat \
[19:39:30] <lorgio> -outputformat com.mongodb.hadoop.mapred.MongoOutputFormat \
[19:49:52] <lorgio> cheeser: was I suppose to run the unmain.sh script on the mongo-streaming.jar or the hadoop-streaming.jar?
[19:55:43] <cheeser> it'll look something like: mongo-hadoop-streaming-1.2.1-SNAPSHOT-hadoop_2.3.jar
[19:59:35] <lorgio> cheeser: i'm still getting an error, but not it's beause of "-mapper"
[21:01:07] <slikts> what would be a good example of a project using mongoose?
[21:53:56] <proteneer> how do I query a collection where I want find _ids that match a list of ids?
[21:54:21] <proteneer> eg candidates = [joe_bob1, job, blarg]
[21:55:01] <proteneer> $in?
[22:16:34] <farahduk> hola
[22:16:41] <farahduk> hola
[22:18:11] <farahduk1> hola